Casus Belli Engineering

DRAFT — Last updated on

Casus Belli Development

Few things in a professional environment are more important than a lasting impression; whether it builds trust or broadcasts poor quality, impression is often what kills a system: people lose confidence in it. When a stakeholder sees something that is always faulty, they see a failed commitment. They do not see, and cannot see, the distinction between the feature that failed and the foundation it rests upon. To them, the system is monolithic; if any part fails, the whole is suspect. This perception, though technically naive, creates social stress that technical accuracy cannot dispel.

As failures accumulate, pressure builds; someone must be responsible, and something must be done. The organization demands resolution, not in the form of root cause analysis or targeted fixes, but in the form of visible action, decisive change and ritual purification. The tension must be released.

What follows is as old as human society itself: the stressed group selects a victim, to which the guilt is assigned, and finally, the victim is destroyed. Through its destruction, social cohesion is restored. The Aztecs sacrificed captives atop pyramids to ensure the sun would rise. We sacrifice codebases in conference rooms to ensure projects will ship. The mechanism is identical; only the altar has changed.

René Girard observed that human communities in crisis often resolve internal conflict through scapegoating: the selection of a victim to bear collective guilt, whose expulsion restores order. The scapegoat need not be guilty; it need only be acceptable as a target. Its guilt is constructed through narrative, not discovered through investigation (see [5], [6]).

Some dangerous individuals, however, institutionalize such ritualistic practices into what I call Casus Belli Engineering: the use of perceived failure as pretext to replace working systems with one's preferred worldview. The broken feature is the crisis that demands resolution. The foundation becomes the scapegoat, selected not for its guilt but for its vulnerability and the convenience of its replacement. In most cases, scapegoating unfolds organically, driven by genuine belief in the narrative. Not here. These individuals are truly alchemists at heart; they have the power to manipulate the phantoms of lasting impressions in their favor [14]. They do not wait for crisis; they nourish it. They do not stumble into scapegoating; they engineer it. They fabricate casus belli deliberately, using the ancient machinery of collective violence to remake systems in their own image. These are not confused engineers making honest mistakes in attribution. These are political operators who have discovered that technical failure can be converted into organizational power.

The danger here is not the scapegoating itself; humans have always scapegoated and always will. The danger is those who have learned to trigger the mechanism strategically, who can reliably convert any failure into an opportunity to destroy what exists and build what they prefer. They are the high priests of a secular religion, and their rituals shape our technological landscape more than any technical merit does.

The Scapegoat Mechanism

In software organizations, the pattern unfolds in exactly the same way. What follows is my perception of it.

Failures create tension, demand explanation, and threaten careers. Rather than confront the actual causes (which might implicate recent decisions, current leadership, or systemic issues), the organization selects a scapegoat, which must be:

  1. Plausibly connected to the failure: It need not be the cause, but it must be in the vicinity. A dependency, a framework, an architectural pattern.

  2. Unable to defend itself: Either because it is old ("legacy"), unfashionable ("outdated"), or championed by people who have left the organization.

  3. Replaceable with something the accusers prefer: This is critical. The scapegoat's destruction must enable the birth of the accuser's alternative.

Once selected, the scapegoat is ritually condemned. Its guilt is established through repeated assertion. "We keep having problems because of X." The actual problems (error handling, testing, operational concerns) fade into the background. X becomes the problem. X must be destroyed.

This is Casus Belli Engineering: the use of a tangential failure as pretext to replace working systems with one's preferred worldview. The broken feature is the casus belli, the justification for war. But the war's objective has nothing to do with the stated cause. The war is about replacing one paradigm with another, using failure as political cover.

The Pattern

The pattern unfolds predictably:

  1. A feature breaks repeatedly. Usually because of poor integration with external systems, inadequate error handling, or environmental issues (network, third-party APIs, deployment infrastructure).

  2. The feature depends on some foundational component. This component works correctly; it has always worked correctly. The failures are not caused by it. But it exists in the dependency chain.

  3. Someone decides the foundational component is "the problem." Not the actual source of failures, but the foundation itself. The architecture. The paradigm. The way things are done.

  4. The real failures become ammunition. "We keep having issues with X" becomes "X is built on Y and Y is the problem." The actual causes (external dependencies, error handling, testing gaps) are ignored in favor of a narrative that indicts the foundation.

  5. A replacement is proposed. The replacement always happens to align with the proposer's preferred technologies, methodologies, or architectural patterns. This is not coincidence.

  6. The foundation is sacrificed. Both the broken feature and its working foundation are scrapped. The broken feature "proves" the foundation was wrong all along. That the foundation worked correctly is dismissed as irrelevant; it was "the wrong approach."

This is Casus Belli Development. The broken feature is the pretext. The real goal is replacing something that works with something that reflects the attacker's worldview.

The Psychology of the Hunt

Girard observed that scapegoating requires certain social conditions: crisis, undifferentiated rivalry, and collective mimesis (imitation). Software organizations provide all three (see [5], [6]).

Crisis: The broken feature. The production incident. The customer complaint. Something has failed, visibly, and someone must be held accountable.

Undifferentiated rivalry: Multiple engineers or teams with similar status competing for influence. No clear authority on technical decisions. Everyone has opinions, few have decisive power.

Collective mimesis: Once someone frames a foundation as "the problem," others imitate this judgment. The narrative spreads. Doubt becomes consensus. What started as one person's opinion becomes organizational truth.

Into this environment steps a familiar leadership-risk profile documented in the personality and leadership literature (see [7], [8], [9], [10]). They exhibit:

  • Low critical thinking: Cannot or will not trace actual causation. Accept narrative explanations over evidence. Confuse correlation with causation because the distinction requires rigor they do not possess or value.

  • Medium to high engagement: Not apathetic. Care deeply about outcomes and will fight for their vision. Motivated, persistent, vocal. Their energy makes their narrative compelling.

  • Insecurity about their technical judgment: Need external validation. Need to prove their approach is "right" by making alternatives "wrong." Cannot propose their preferred solution on its merits; must first destroy the existing solution's legitimacy.

This personality type is perfect for initiating the scapegoat mechanism. They have the motivation to identify a target, the rhetorical skill to build a narrative, and the psychological need to see the sacrifice through. The broken feature gives them the crisis they need. Their low critical thinking prevents them from distinguishing the feature's actual causes from the foundation's perceived guilt. Their engagement ensures they will push the narrative until it becomes consensus (see [8], [9], [10]).

And critically, their insecurity demands that they not just propose an alternative, but destroy the existing approach. The scapegoat must die so that their worldview can be validated as the savior.

The Case of Agile: Industrial-Scale Scapegoating

To be precise, the problem is not iterative or incremental development itself. Those ideas are older than Agile, well established, and technically sound. You can find explicit advocacy for IID (iterative and incremental development) decades before Agile branding, including mainstream software engineering literature arguing that large-program design must be incremental because requirements are never complete up front (see [1], [2]).

The problem is what happened at movement scale: Agile discourse became one of the most successful examples of Casus Belli Development in software history. It became a demonstration of Girardian scapegoating at industrial scale.

The crisis: software projects failing. Over budget, over schedule, wrong requirements, poor quality. Real problems requiring real solutions.

The scapegoat: "Waterfall." "Heavyweight processes." "Big upfront design." "Comprehensive documentation." A constellation of practices bundled together and given a name so they could be ritually condemned.

The brilliance was in the selection. "Waterfall" as a term was largely a straw man; few organizations actually practiced pure sequential development as described in the caricature. Even the 1970 Royce paper usually cited as "waterfall" includes explicit iteration and feedback loops rather than strict one-pass sequencing (see [3]). But the label was plausible enough. Projects did fail. There were documentation standards. There were phase gates. The connection could be asserted, and assertion was enough. Context mattered less than rhetorical usefulness. Social pressure had already accumulated; what the narrative needed was a guilty name and a cleansing alternative.

As Ron Garret describes in his article, it was terms, not ideas, that had to leave common speech:

"It is incredibly frustrating watching all this happen. My job today (I am now working on software verification and validation) is to solve problems that can be traced directly back to the use of purely imperative languages with poorly defined semantics like C and C++. (The situation is a little better with Java, but not much.) But, of course, the obvious solution (to use non-imperative languages with well defined semantics like Lisp) is not an option. I can't even say the word Lisp without cementing my reputation as a crazy lunatic who thinks Lisp is the Answer to Everything. So I keep my mouth shut (mostly) and watch helplessly as millions of tax dollars get wasted. (I would pin some hope on a wave of grass-roots outrage over this blatant waste of money coming to the rescue, but, alas, on the scale of outrageous waste that happens routinely in government endeavors this is barely a blip.)"

Ron Garret, "Lisping at JPL," 2002

The manifesto provided the ritual language for the sacrifice.

To be fair, the document itself includes an explicit caveat: there is value in the items on the right, but the items on the left are valued more. Read literally, this is not an absolute rejection of process, documentation, contracts, or plans. The problem is how this language functioned socially.

In practice, the caveat is what disappears. What remains are slogans built on asymmetry:

"Individuals and interactions over processes and tools" becomes a standing suspicion of process itself.

"Working software over comprehensive documentation" becomes a durable excuse to underinvest in documentation until knowledge collapses into oral tradition.

"Customer collaboration over contract negotiation" becomes a way to frame governance and contractual discipline as anti-customer bureaucracy.

"Responding to change over following a plan" becomes rhetorical permission to treat planning as naive, even when disciplined planning is exactly what makes adaptation coherent.

So the issue is not that the manifesto text is absurd as written. The issue is that it is rhetorically engineered for movement politics: morally legible, easily memetic, and difficult to oppose without sounding regressive. Each pair supplies a reusable villain class ("processes," "documentation," "contracts," "plans") and a ready-made moral identity for the alternative. That is why it scales as narrative power even when the underlying engineering ideas were already known (see [1], [2], [4]).

The manifesto did not invent iterative thinking. It provided a casus belli. It gave people permission to replace existing processes by framing those processes as the source of failure. The actual problems (poor requirements gathering, lack of customer access, inadequate testing, unrealistic schedules, management dysfunction) were not addressed. Instead, "waterfall" became the scapegoat, and killing it became the solution.

This is textbook Girardian scapegoating. The community (software industry) faced crisis (failing projects). A victim was selected (waterfall/heavyweight processes). The victim was assigned guilt through repeated assertion. The victim was destroyed (organizations abandoned existing processes). Social cohesion was restored (everyone is now "Agile"). The actual problems persisted, but the ritual had been completed (see [5], [6]).

Agile succeeded at movement scale not because it introduced unprecedented engineering ideas, but because it performed the scapegoat mechanism perfectly. It identified a plausible enemy, constructed its guilt, and offered itself as salvation. That the "new" core was often a rebranding of existing iterative practices did not matter (see [1], [2], [4]). The ritual worked. The scapegoat died. The narrative won.

Why It Works

Casus Belli Development works because it exploits cognitive biases and organizational dynamics:

Availability bias: The recent failure is vivid and memorable. The years of the foundation working correctly are abstract and forgotten. The broken feature becomes the "proof" of systemic problems.

Confirmation bias: Once someone decides the foundation is wrong, they interpret all subsequent issues as validation. Successes are ignored or attributed to "working around" the foundation. Failures are evidence of fundamental flaws.

Status quo bias (inverted): Normally people resist change, but if you can frame the status quo as "failed," people become eager to change. The broken feature demonstrates failure; therefore the foundation must go.

Authority and consensus: In low-trust environments, people defer to confident voices. If someone repeatedly asserts that the foundation is the problem, others will accept it rather than investigate. The narrative becomes consensus, and consensus becomes truth (see [11], [12]).

Sunk cost fallacy (avoided through replacement): Rather than fix the actual problem (which would require admitting the recent failures were addressable), replacing the foundation lets you avoid confronting sunk costs. You are not "fixing mistakes"; you are "adopting better practices."

The Damage

The damage from Casus Belli Development is substantial:

Good systems are destroyed. Foundations that worked, that were well-understood, that had years of refinement, are scrapped because they were adjacent to a failure. The organization loses institutional knowledge and proven solutions.

Actual problems are not solved. The broken feature's real causes (poor error handling, inadequate testing, environmental issues) remain. They will resurface in the new system, because they were never addressed.

Churn and instability. Every few years, a new broken feature provides a new casus belli. The cycle repeats. Foundations are replaced, then replaced again. Nothing stabilizes because stability is confused with stagnation.

Loss of trust. When replacements fail to solve the problems they claimed to address, trust erodes. But rather than recognize the pattern, organizations blame the new foundation and start looking for the next replacement.

Talent attrition. Engineers who understand causation, who can distinguish between correlation and root cause, become frustrated. They leave. What remains are those who excel at political maneuvering disguised as technical leadership.

Recognition and Resistance

How do you recognize Casus Belli Development? Look for these patterns:

The scope of the proposed solution exceeds the scope of the problem. A broken feature that fails due to external API timeouts does not require rewriting the entire service layer in a different language. If the solution is much larger than the problem, suspect ulterior motives.
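To make the disproportion concrete, here is a minimal sketch of the kind of targeted fix an API-timeout failure actually warrants: a bounded retry with backoff at the integration point. The names (`fetch_quote`, the parameters) are hypothetical illustrations, not anyone's real API; the point is the scale of the change, a few lines where the failure occurs rather than a new service layer.

```python
import time

def fetch_quote(call, retries=3, base_delay=0.1):
    """Invoke an external API call with a bounded retry and backoff.

    `call` is any zero-argument function that may raise TimeoutError.
    This is roughly the blast radius an API-timeout failure justifies:
    a localized change at the integration point, not a rewrite.
    """
    for attempt in range(retries):
        try:
            return call()
        except TimeoutError:
            if attempt == retries - 1:
                raise  # exhausted retries: surface the real failure
            # Exponential backoff before retrying the flaky dependency.
            time.sleep(base_delay * (2 ** attempt))
```

If a proposal in response to such a failure is orders of magnitude larger than this, the gap between problem and solution is itself the evidence worth examining.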

The failure is used to indict a paradigm rather than a specific implementation. "This OOP code is hard to maintain" becomes "OOP is the problem." "This microservice is hard to debug" becomes "microservices were a mistake." The leap from specific to general is where the casus belli operates.

The proposed replacement aligns suspiciously well with the proposer's expertise or preferences. If the person who has been advocating for GraphQL suddenly decides that a REST API failure proves REST is fundamentally flawed, be skeptical.

The actual root causes are not analyzed rigorously. If the investigation stops at "X is built on Y and therefore Y is wrong," rather than continuing to "X fails because of Z which is unrelated to Y," you are witnessing Casus Belli Development.

The rhetoric emphasizes revolution over evolution. "We need to completely rethink how we do X" rather than "we need to fix this specific issue in X." Revolutionary rhetoric is a tell; it signals that the goal is replacement, not repair.

How do you resist it?

Insist on root cause analysis. What is the actual mechanism of failure? Not "the system is bad," but "this specific call fails because of this specific condition." Causation, not correlation.

Separate the failure from the foundation. Can the failure be fixed without replacing the foundation? If yes, why are we discussing replacement?

Demand that proposals address actual problems. Will the new approach actually solve the root causes? Or will it just move them to a different layer?

Evaluate proposals on their own merits, not as alternatives to "failed" systems. The new approach should stand on its own value, not derive value from tearing down the old approach.

Recognize psychological patterns. Is this person insecure about their technical judgment? Are they trying to validate their worldview rather than solve a problem? Motivation matters (see [8], [9], [10]).

The Agile Postscript

Returning to Agile: what would honest advocacy have looked like?

It would have said: "We have found that iterative development with frequent customer feedback reduces requirement mismatches. Here are case studies. Here are measured outcomes. We propose adopting these practices."

Instead, we got: "Traditional development is broken. It values processes over people. It produces documentation instead of working software. We propose a new paradigm."

The first is an engineering argument. The second is a casus belli. The first might have led to thoughtful adoption of useful practices that were already known in substance. The second led to wholesale replacement of development methodologies with "Agile" frameworks that often retained the worst aspects of what they claimed to replace (rigid processes, now called "ceremonies"; comprehensive documentation, now called "backlogs"; detailed planning, now called "sprint planning").

Agile succeeded not because iterative development was wrong before it arrived, but because Agile branding provided a casus belli for people who wanted paradigm change and needed politically acceptable justification. The manifesto gave them that justification. The broken projects were the evidence. "Waterfall" was the scapegoat. The replacement became inevitable once the narrative was established.

This is how Casus Belli Development works at scale.

A Final Observation

Girard noted that scapegoating requires collective blindness. The community must not recognize the mechanism while it operates. Once you see the scapegoat for what it is (an innocent victim bearing projected guilt), the ritual loses power. But if you believe the scapegoat is genuinely guilty, the mechanism works perfectly (see [6]).

This is why Casus Belli Development persists. Participants do not see themselves as performing a ritual. They believe they are solving problems, making technical decisions, improving the system. The narrative feels true because everyone around them agrees. The scapegoat's guilt feels obvious because it has been asserted so many times.

The pattern continues because it is effective at what it actually does: resolve organizational tension, validate preferred worldviews, enable political change under technical cover. That it does not solve the actual technical problems is irrelevant to its success as a social mechanism (see [11], [12], [13]).

But once you see it, you cannot unsee it. The next time a broken feature leads to calls for replacing the foundation, you will recognize the pattern. The crisis. The scapegoat. The false causation. The preferred alternative waiting in the wings. The ritual language of condemnation.

And you will face a choice: participate in the ritual, or resist it.

Resistance is difficult. It requires insisting on causation when narrative is more compelling. It requires defending systems that have been marked for death. It requires being the person who "doesn't get it," who "resists change," who "defends the status quo."

But resistance is engineering. Engineering is about understanding what actually causes what, about making targeted improvements based on evidence, about distinguishing correlation from causation even when the narrative is seductive.

Casus Belli Development is not engineering. It is politics disguised as engineering, ritual disguised as analysis, scapegoating disguised as problem-solving.

We should choose engineering. Even when the ritual is more socially acceptable. Even when the scapegoat has already been selected. Even when everyone else has agreed on the narrative.

Especially then.

References

  1. Peter Van Roy and Seif Haridi, Concepts, Techniques, and Models of Computer Programming (MIT Press, 2004), ch. 6 "Program design in the large," sec. 6.7.1 "Design methodology" (explicit IID advocacy; notes successful use since at least the 1950s).
  2. Craig Larman and Victor R. Basili, "Iterative and Incremental Development: A Brief History," IEEE Computer 36, no. 6 (2003): 47-56.
  3. Winston W. Royce, "Managing the Development of Large Software Systems," in Proceedings of IEEE WESCON (1970).
  4. Manifesto for Agile Software Development (2001), https://agilemanifesto.org/
  5. René Girard, Violence and the Sacred (Johns Hopkins University Press, 1977 [orig. 1972]).
  6. René Girard, The Scapegoat (Johns Hopkins University Press, 1986 [orig. 1982]).
  7. Robert Hogan, Gordon J. Curphy, and Joyce Hogan, "What We Know About Leadership: Effectiveness and Personality," American Psychologist 49, no. 6 (1994): 493-504.
  8. Robert Hogan and Joyce Hogan, "Assessing Leadership: A View from the Dark Side," International Journal of Selection and Assessment 9, no. 1-2 (2001): 40-51.
  9. Robert Hogan and Robert B. Kaiser, "What We Know About Leadership," Review of General Psychology 9, no. 2 (2005): 169-180.
  10. Robert B. Kaiser, Jarrett M. LeBreton, and Joyce Hogan, "The Dark Side of Personality and Extreme Leader Behavior," Applied Psychology 64, no. 1 (2015): 55-92.
  11. Irving L. Janis, Victims of Groupthink (Houghton Mifflin, 1972).
  12. Amy C. Edmondson, "Psychological Safety and Learning Behavior in Work Teams," Administrative Science Quarterly 44, no. 2 (1999): 350-383.
  13. Sidney Dekker, The Field Guide to Understanding Human Error, 3rd ed. (Ashgate, 2014).
  14. Ioan P. Couliano, Eros and Magic in the Renaissance (University of Chicago Press, 1987).