On this point strategy and kindness agree: First, offer the olive branch. Only when that offer is rejected should you go to war.
Put another way:
When other factors are in play, you have a choice to make. The more cohesive your group, the more obvious the answer will be. But if it truly is a dilemma, then it’s already too late.
From Team of Teams: New Rules of Engagement for a Complex World, by General Stanley McChrystal, Tantum Collins, David Silverman, and Chris Fussell:
When they understood the whole picture, they began to trust their colleagues. Much like the prisoners deciding whether or not to rat, our commanders’ responsiveness to such demands grew as they came to understand the greater environment in which the decision had been made, and the people receiving what had been taken away. Previously, the world outside of a commander’s domain looked like a black box; once an asset left, it was just gone. Once they could see why and how their assets were being used, however, and once they knew and respected the other individuals handling these tools, things changed.
Before, these decisions took place behind closed doors. Now, the resourcing conversations sometimes occurred right in front of them during an O&I. “When we started constantly talking at lower levels of the organization,” explains an enlisted SEAL who worked with the Task Force in Iraq, “we could basically see where the fight was hot, where it wasn’t, and where people needed ISR the most. Plus, we could see that it was actually to our benefit sometimes to surrender that asset.” With that awareness came a faith that when theirs was the priority mission, they would get what they needed when they needed it. Holistic understanding of the enterprise now permeated the ranks.
As person-to-person relationships across the enterprise deepened, unit commanders gave away prized assets, often to the initial surprise and frustration of those below them, because they trusted that the asset would be used in a context even more critical than their current situation. Moreover, they began to see the favor being repaid in kind. This fostered trust in the other unit among even the most skeptical, hardened, competitive operators. Suddenly, we were overcoming our Prisoner’s Dilemma.
We had worked out our solutions to the Task Force’s Prisoner’s Dilemma by trial and error, but we later learned that game theory scholars shared our conclusions. In 1980, Robert Axelrod, a professor of political science at the University of Michigan, solicited programs for an iterative computer Prisoner’s Dilemma tournament. The fourteen entries in the original first round—submitted by leading game theorists across a spectrum of disciplines, including economics, psychology, mathematics, and political science—varied greatly in initial strategy and complexity of coding. However, the winning strategy contained just four lines of code. Submitted by University of Toronto professor Anatol Rapoport, the program was called Tit for Tat. The strategy always began with cooperating, and then simply did what the other player did on the previous move, cooperating if the other cooperated, defecting if the other defected. It did not hold a grudge: if its opponent began to cooperate again after defecting, Tit for Tat would also return to cooperation. A second round of the tournament was held, and many more entries were submitted. Again, Rapoport’s simple strategy won out. The program succeeded because it defaulted to trusting, cooperative behavior, and punished the other player for selfish behavior. However, as one peace and conflict studies expert has since noted, “the punishment lasted only as long as the selfish behavior lasted. This proved to be an exceptionally effective sanction, quickly showing the other side the advantages of cooperating.”
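The Tit for Tat strategy described above is simple enough to sketch in a few lines. Below is a minimal illustration (not Rapoport's original code): the payoff values are the standard ones from the Prisoner's Dilemma literature, and the opponent strategy `one_defection` is a hypothetical example chosen to show that Tit for Tat's punishment "lasted only as long as the selfish behavior lasted."

```python
# A minimal sketch of iterated Prisoner's Dilemma play, with the
# standard payoff matrix (values assumed, not taken from the book).

PAYOFFS = {  # (my move, their move) -> my points
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect (sucker's payoff)
    ("D", "C"): 5,  # I defect, they cooperate (temptation)
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(my_history, their_history):
    """Cooperate on the first move; afterwards, mirror the opponent's last move."""
    if not their_history:
        return "C"
    return their_history[-1]

def play(strategy_a, strategy_b, rounds):
    """Play two strategies against each other and return their total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
    return score_a, score_b

# A hypothetical opponent that defects once, then cooperates forever.
# Tit for Tat punishes exactly once, then returns to cooperation --
# it "did not hold a grudge."
def one_defection(my_history, their_history):
    return "D" if not my_history else "C"

print(play(tit_for_tat, one_defection, 5))  # -> (14, 14)
```

Against `one_defection`, Tit for Tat loses round one (0 vs. 5), retaliates once in round two (5 vs. 0), and then both sides settle back into mutual cooperation at 3 points a round, which is exactly the dynamic the passage credits for its tournament wins.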