From OODA to D-U-A

Re-Engineering Command Topology for Decision Dominance in the Algorithmic Age

The Mechanised Evolution of Boyd’s Cognitive Model for the AI-Enabled Battlespace

1. Introduction: The Evolution of Adaptation and Uncertainty

Colonel John Boyd’s Observe-Orient-Decide-Act (OODA) loop remains the essential foundation for understanding competitive strategy and cognitive adaptation in conflict.1 Boyd’s strategic thought is fundamentally dedicated to operating under duress, acknowledging that every step of the decision cycle is tainted by observer bias, incompleteness, and environmental chaos.3 His framework, built on a synthesis of scientific principles, is predicated on the imperative that the observer will never have a comprehensive or neutral view of reality.4

Boyd’s genius lay in coupling cognition with action; our era’s challenge is coupling machine cognition with human intent. While Boyd’s philosophy of managing intrinsic uncertainty remains sound, the operational structure of the original OODA loop, designed for human-bounded perception, is now structurally challenged by the sheer volume of data and the scale of modern, decentralized conflict.3 The sequence must invert to maintain strategic tempo.

The goal of achieving strategic tempo has shifted from merely accelerating reaction speed to ensuring decision coherence across geographically distributed actors.5 We propose that the Decision-Driven Intelligence, Surveillance, Reconnaissance, and Targeting (ISR-T) model, identified as IDDI, represents the mechanized evolution of OODA.

Thesis: The OODA loop’s cognitive steps persist, but their sequence and operational execution must invert under automation. Decision-Driven ISR-T transforms the OODA loop into a concurrent, intent-driven topology: Decide–Understand–Act (D-U-A).

2. The Functional Transformation of the OODA Loop

The D-U-A model is the operational realization of Boyd's principle of swift adaptation at algorithmic speed. We accelerate the process by functionally collapsing the initial sequential phases.

2.1 The Transformation of the Observe Phase

Boyd correctly recognized that observation is inherently non-neutral, limited by pre-existing mental schemata, biases, and the fact that information may be manipulated or asynchronous.6 The challenge today is not the quality of observation, but the speed required to cope with data volume. Waiting for generalized awareness, the sequential logic of the traditional Observe phase, is now a strategic liability.2

The Decision-Driven topology transforms observation from passive reception to active, decision-conditioned interrogation.2 Sensors are no longer bystanders; they become decision agents executing intent.2 This is achieved through Edge AI, where onboard processors use algorithms like Automated Target Recognition (ATR) to classify threats and identify targets before the data ever leaves the collection node, immediately transforming raw data into initial synthesis.7 The Observe phase thus changes its nature, focusing ruthlessly on the information critical for an impending decision point, thereby accelerating the entire loop.
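The decision-conditioned interrogation described above can be sketched in miniature: an edge node classifies locally and forwards only what the active decision point needs, so raw data never leaves the collection node. The class names, confidence threshold, and `edge_filter` helper below are illustrative assumptions, not a fielded ATR interface.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # class assigned by the onboard ATR model
    confidence: float  # classifier confidence, 0..1

def edge_filter(detections, decision_intent, min_confidence=0.8):
    """Forward only detections relevant to the active decision point.

    decision_intent: the set of target classes the impending decision cares
    about. Everything else stays at the node.
    """
    return [d for d in detections
            if d.label in decision_intent and d.confidence >= min_confidence]

# A sensor sees three objects; the decision point concerns armored vehicles.
observed = [Detection("tank", 0.93), Detection("truck", 0.97), Detection("tank", 0.55)]
forwarded = edge_filter(observed, decision_intent={"tank"})
```

The filter encodes the key inversion: the decision requirement exists before the observation, and the sensor acts as its agent rather than as a passive reporter.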

2.2 The Mechanization of Orientation

Orientation is Boyd’s intellectual center of gravity: the corrective phase dedicated to the creative destruction of outdated mental models and the synthesis of raw, flawed data into a viable basis for action.9 This phase, encompassing cultural traditions, previous experience, and rigorous analysis, is vital for managing intrinsic uncertainty.6

While intellectually vital, this human-intensive rigor is now the primary tempo bottleneck in machine-speed conflict, slowing the crucial cognitive phase.10 The IDDI model delegates the initial, high-volume synthesis required for orientation to the algorithmic enterprise. Tailored Understanding (the U in D-U-A) is the technological realization of Orientation, executed at algorithmic tempo.11 This understanding is system-level synthesis enabled by Artificial Intelligence (AI) and Machine Learning (ML), which is then delivered as actionable intelligence, pre-filtered by intent, for validation by human judgment.7

2.3 Boyd's Philosophical Imperative: Managing Intrinsic Uncertainty

The IDDI/D-U-A model must adhere to the foundational philosophical principles Boyd used to define the OODA loop's necessity:3

  • Gödel’s Incompleteness Theorem: IDDI must accept that any logical model of reality is fundamentally incomplete, refuting the pursuit of certainty and requiring continuous refinement of mental models, a perpetual process of self-correction.12

  • Heisenberg’s Uncertainty Principle: IDDI must abandon the pursuit of perfect, comprehensive information, as the very act of Observation and Action changes the reality being observed, confirming that the strategic universe is constantly changing.12

  • The Second Law of Thermodynamics (Entropy): This principle confirms that the universe continuously moves toward disorder, driving the need for high tempo and continuous adaptation to counter environmental dissipation and maintain viability.4

The IDDI/D-U-A framework, by actively managing this systemic ambiguity, structures its success metrics around viability (rapid adaptation) rather than the impossible pursuit of raw certainty.3 Inverting OODA is not a rejection of Boyd’s logic but a continuation of his insight: uncertainty cannot be eliminated, only managed through tempo and coherence. This sets the foundation for a new command paradigm: Problem-Centric Command.

2.4 Functional Decoupling: Modes in a Coupled Field

The D-U-A topology is the structural solution in which the Observe and Orient phases are functionally collapsed and delegated. Decide, Understand, and Act are no longer sequential stages but concurrent modes operating in a coupled field of intent and effect. This transition enables three life cycles: Pre-hoc Formation (defining intent and risk), Ad-hoc Adaptation (dynamic execution), and Post-hoc Reflection (learning and validating outcomes).1


3. From Information Requirements to Problem-Centric Command

3.1 The End of Request-Based Intelligence

The Information Requirements Management (IRM) process and its derivative, Commander’s Critical Information Requirements (CCIRs), were designed for linear, hierarchical command systems in which information moved upward through deliberate channels.

In practice, this model dislocates the decision-maker from both the analyst and the battlespace.
The commander asks; staff interpret; analysts collect; and by the time information arrives, the opportunity for decisive action has already passed.

In a distributed, algorithmic environment, the act of “requesting information” itself becomes a structural bottleneck. The IRM cycle assumes that the commander knows in advance what must be asked and that the environment will remain stable long enough for the answer to retain relevance. Neither assumption holds in modern operations. Therefore, IDDI replaces IRM and CCIR with a Problem-Centric Activity (PCA) framework, one that keeps the decision-maker, analyst, and environment dynamically connected through shared understanding rather than sequential reporting.

3.2 The Problem-Centric Activity Framework

The PCA framework is built on four interlocking steps:

  1. Problem Definition and Allocation.  Every operation begins with a defined problem rather than an information requirement. The problem is allocated to a command or commander as a bounded domain of responsibility, complete with spatial, temporal, and ethical parameters. This ensures ownership of understanding rather than ownership of data.

  2. Decision-Point Identification.  Within each problem, critical decision points are mapped in advance. These decision points define when and what kind of understanding must exist to enable timely, proportionate action. They replace PIRs and FFIRs as the focal nodes for ISR-T tasking.

  3. Understanding Levels.  For each decision point, required levels of understanding are defined according to two variables: decision risk and time.

    • High-risk decisions (e.g., kinetic effects or escalation control) demand higher confidence levels and richer fusion.

    • Time-critical decisions may accept lower confidence thresholds in exchange for tempo.

  This quantification of risk and time formalises Boyd’s notion of acceptable uncertainty within an adaptive system.

  4. Action Triggers.  Once the required level of understanding is achieved, or when time expires, action is authorised.  This replaces the serial CCIR response with a dynamic threshold model: the system acts when the commander’s predefined risk/time parameters converge.
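The risk/time convergence logic of steps 3 and 4 can be sketched as a small state check. The field names, confidence scale, and `ready_to_act` helper are illustrative assumptions rather than doctrinal terms.

```python
from dataclasses import dataclass
import time

@dataclass
class DecisionPoint:
    required_confidence: float  # understanding level demanded by decision risk
    deadline: float             # time after which tempo overrides confidence

def ready_to_act(dp, current_confidence, now):
    """Authorise action when understanding meets the risk threshold,
    or when the time budget expires (accepting residual uncertainty)."""
    return current_confidence >= dp.required_confidence or now >= dp.deadline

# High-risk decision: act only at >= 0.9 confidence, unless the deadline passes.
dp = DecisionPoint(required_confidence=0.9, deadline=time.time() + 3600)
authorised = ready_to_act(dp, current_confidence=0.95, now=time.time())
```

The trigger is deliberately symmetric: a high-confidence picture releases action early, and an expired time budget releases it late, with the commander owning the risk encoded in both parameters.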

3.3 The Reconnected Command Chain

By orienting ISR-T directly around problems and decision points, the PCA model dissolves the artificial separation between the commander, the analyst, and the environment.

Understanding is generated and visualised within the same mesh that drives action; feedback is instantaneous; and responsibility for cognition is shared across human and machine nodes.

This structure achieves three outcomes:

  • Unity of Cognition: Analysts and commanders operate in the same problem space, not on separate task lists.

  • Temporal Precision: Understanding is delivered at the exact moment of decision relevance.

  • Moral Agency: Commanders retain ethical ownership of decisions because the system serves their intent, not their requests.

3.4 Implications for Doctrine

Replacing CCIRs with Problem-Centric Activity does not discard the principles of Mission Command; it fulfils them.  Mission Command demands decentralised initiative guided by intent; PCA provides that philosophy with a technical and procedural architecture.

It ensures that every collection act, every analytic cycle, and every autonomous process is anchored to a problem that matters, a decision that is known, and a risk that is understood.

In this sense, PCA represents not a rejection of doctrinal heritage but the next Boydian evolution: from information requirements to understanding requirements, from observation to problem ownership, from command as request to command as cognition. If Problem-Centric Command defines the logic of decision-driven cognition, the D-U-A enterprise architecture provides its physical execution through automation, integration, and edge-based understanding.

4. The Decision-Driven ISR-T Enterprise Architecture

The D-U-A model does not add complexity; it removes latency. By merging sensing and sense-making at the edge, the architecture transforms ISR from a spectator function into an instrument of command.7

4.1 Automation and Edge Cognition

The D-U-A model leverages Edge AI and onboard processing to collapse the Observe and Orient stages, enabling the C2 process to move toward "lightning-fast decision-making".7 Automated Mission Planning and dynamic re-tasking based on emerging priority areas actively enforce the decision requirement loop.19 The integrated ISR-T systems deliver an intelligence edge, ensuring timely and precise intelligence defines mission success in the connected battlespace.20
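Dynamic re-tasking on emerging priority areas reduces, at its simplest, to a priority queue over collection tasks. The task names and the `retask` helper below are hypothetical, a minimal sketch of the scheduling behaviour rather than any fielded mission-planning system.

```python
import heapq

def retask(sensor_queue, new_priority_area):
    """Dynamic re-tasking: push an emerging priority area into the plan.
    The lowest-numbered (most urgent) task is always executed next."""
    heapq.heappush(sensor_queue, new_priority_area)
    return sensor_queue[0]  # peek at the task the sensor will service next

# Queue holds (priority, area) pairs; lower number = more urgent.
queue = [(2, "route_recce"), (3, "pattern_of_life")]
heapq.heapify(queue)
next_task = retask(queue, (1, "emerging_target_area"))
```

Because insertion is logarithmic, an emerging priority pre-empts the standing plan without replanning the whole collection deck, which is what lets the decision requirement loop run at machine tempo.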

4.2 Strategic Applicability and Constraints

The D-U-A topology excels in tactical and operational scenarios characterized by high information flow, rapidly changing circumstances, and short decision timelines, where the goal is maximizing speed of action.5

However, the model’s utility must be carefully managed at the strategic level, where ambiguity is high and human expertise is critical for novel, non-linear threats. In this context, the D-U-A framework should primarily serve to inform strategic orientation and policy-level decisions, leveraging AI to augment human judgment, not replace it.22

5. Compression and Coherence: OODA’s Successor

5.1 Compression of the Decide–Act Chain

Tempo dominance is not a function of faster observation, but of anticipatory understanding pre-aligned to decision.2 Since the Observe and Orient phases are functionally collapsed, compression occurs directly between Decision and Action.24

The function of IDDI is to ensure the zero latency required to translate decisions into action.2 By pre-conditioning understanding to the commander’s intent, IDDI ensures that the fidelity of the decision, the speed of response, and the effectiveness of the action are all maximized.5

5.2 Cognitive Resilience: The Dual-Stream Model

The transition to decision-centric tasking heightens the risk of Confirmation Bias (unconsciously steering analysis to reinforce the initial decision) and Tunnel Vision (missing critical anomalies outside the CCIRs).25

To maintain adaptability and situational awareness, IDDI must employ a dual-stream cognitive architecture:

  1. Decision-Driven Focus: The aggressive D-U-A cycle optimized for immediate tactical action.

  2. Autonomous Anomaly Detection: Parallel curiosity engines, AI streams dedicated to continuous, generalized pattern recognition and anomaly detection, independent of the current mission CCIRs.26

This dual-stream model, pairing decision-driven focus with autonomous anomaly detection, ensures that tempo is achieved without sacrificing coherence or moral clarity.
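A toy illustration of the dual-stream idea, assuming a label-based intent filter for the focused stream and a z-score detector for the curiosity stream; both are placeholders for real fusion and ML pipelines.

```python
import statistics

def focused_stream(observations, intent_labels):
    """Decision-driven focus: pass through only intent-relevant observations."""
    return [o for o in observations if o["label"] in intent_labels]

def anomaly_stream(observations, baseline, z_threshold=3.0):
    """Curiosity engine: flag signal levels far from the historical baseline,
    independent of the current mission intent."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [o for o in observations
            if abs(o["signal"] - mean) / stdev > z_threshold]

baseline = [10.0, 11.0, 9.5, 10.5, 10.0]
obs = [{"label": "tank", "signal": 10.2},
       {"label": "unknown_emitter", "signal": 40.0}]

focused = focused_stream(obs, intent_labels={"tank"})  # what the decision needs
anomalies = anomaly_stream(obs, baseline)              # what the decision missed
```

The point of running both streams on the same observations is that the unknown emitter, invisible to the intent filter, is still surfaced, which is exactly the tunnel-vision failure mode the dual-stream architecture guards against.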

5.3 Cognitive Curvature: Maintaining Coherence at Tempo

Acceleration carries a second risk beyond bias: at machine tempo, meaning itself can shear apart. What matters is not faster reaction but the ability to maintain curvature, coherence sustained across the entire loop.

The dual-stream architecture supplies this curvature across the decision life cycle: the Decision-Driven Focus (D-U-A) stream executes Ad-hoc Adaptation through immediate tactical action, while the Autonomous Anomaly Detection stream, through continuous, generalized pattern recognition, feeds Post-hoc Reflection and learning.20

This commitment to curvature fulfils Boyd’s philosophical requirement for continuous adaptation.

6. Human Factors and Ethical Boundaries

The IDDI model is built for cognitive augmentation, not wholesale automation.27 Strategic advantage lies with the most effective human-machine team, not merely the newest algorithm.23

Human-on-the-loop oversight remains mandatory for strategic decisions and for judgments of proportionality, risk tolerance, and ethical boundaries, where the evaluation of harm and probability is critical.29 This arrangement exploits human cognitive strengths (creative Orientation, ethical judgment, and bias mitigation) while leveraging machine strengths in efficiency and rapid tactical detection.3

7. Conclusion: The Realization of Boyd’s Unity

The OODA loop remains the essential theory of adaptation. However, the Decision-Driven ISR-T (IDDI) model provides the necessary operational grammar to execute this theory at speed.

By shifting from passive to active interrogation and delegating sequential orientation to machine-speed synthesis, the D-U-A topology actively maintains the crucial link between the observer and the observed, restoring the unity Boyd sought.2

IDDI is the necessary step in the evolution of command. It is not an attack on Boyd’s legacy but its fulfillment: the mechanized successor designed for a world of intrinsic uncertainty and algorithmic speed. The future battlespace will not reward those who observe fastest, but those who understand first and act with coherence.

Works cited

  1. John Boyd and The OODA Loop - Psych Safety, accessed on October 24, 2025, https://psychsafety.com/john-boyd-and-the-ooda-loop/

  2. OODA AND IDDI

  3. Refining OODA Loop Understanding

  4. The Strategic Theory of John Boyd - Tasshin, accessed on October 27, 2025, https://tasshin.com/blog/the-strategic-theory-of-john-boyd/

  5. MISSION COMMAND - Air Force Doctrine, accessed on October 24, 2025, https://www.doctrine.af.mil/Portals/61/documents/AFDP_1-1/AFDP%201-1%20Mission%20Command.pdf

  6. Observe, Orient, Decide, and Act (The OODA Loop) - D. Brown Management, accessed on October 24, 2025, https://dbmteam.com/insights/observe-orient-decide-and-act-the-ooda-loop/

  7. Persistent ISR at the Tactical Edge - You Need More Than AI to Counter Today's Threats, accessed on October 24, 2025, https://clearalign.com/knowledge-center/id/24/persistent-isr-at-the-tactical-edge--you-need-more-than-ai-to-counter-todays-threats

  8. Edge ISR: Revolutionizing Intelligence in Contested Environments, accessed on October 24, 2025, https://othjournal.com/2025/04/25/edge-isr-revolutionizing-intelligence-in-contested-environments/

  9. Mastering the OODA Loop: A Comprehensive Guide to Decision-Making in Business, accessed on October 24, 2025, https://corporatefinanceinstitute.com/resources/management/ooda-loop/

  10. Colonel John Boyds Thoughts on Disruption - Marine Corps University, accessed on October 27, 2025, https://www.usmcu.edu/Outreach/Marine-Corps-University-Press/MCU-Journal/JAMS-vol-14-no-1/Colonel-John-Boyds-Thoughts-on-Disruption/

  11. What is Military Intelligence and Why It Matters - SpecialEurasia, accessed on October 24, 2025, https://www.specialeurasia.com/2025/10/21/military-intelligence/

  12. Notes on The Tao of Boyd Article - Daniel Fitzgerald's Blog, accessed on October 27, 2025, https://danielfitzgerald.dev/index.php/2021/03/17/notes-on-the-tao-of-boyd-article/

  13. Insights and Best Practices Focus Paper: Mission Command - Joint Chiefs of Staff, accessed on October 24, 2025, https://www.jcs.mil/Portals/36/Documents/Doctrine/fp/missioncommand_fp_2nd_ed.pdf

  14. (PDF) Introducing intents to the OODA-loop - ResearchGate, accessed on October 24, 2025, https://www.researchgate.net/publication/336546987_Introducing_intents_to_the_OODA-loop

  15. OODA Point: The Requirement for an Airman's Approach to Operational Design (Part II), accessed on October 24, 2025, https://othjournal.com/2019/08/20/ooda-point-the-requirement-for-an-airmans-approach-to-operational-design-part-ii/

  16. Initial Commander's Critical Information Requirements and the 5 Common Command Decisions - Fort Benning, accessed on October 24, 2025, https://www.benning.army.mil/armor/eARMOR/content/issues/2017/Fall/4Feltey-Mattingly17.pdf

  17. Data-Informed vs. Data-Driven vs. Decision-Driven: Why Executives Must Understand the Difference, accessed on October 24, 2025, https://decisionsciences.blog/2025/02/24/data-informed-vs-data-driven-vs-decision-driven-why-executives-must-understand-the-difference/

  18. Lightning-fast decision-making: How AI can boost OODA loop impact on cybersecurity, accessed on October 24, 2025, https://cloud.google.com/transform/lightning-fast-decision-making-how-ai-can-boost-ooda-loop-impact-on-cybersecurity

  19. AI Impact Analysis on Airborne ISR Industry - MarketsandMarkets, accessed on October 24, 2025, https://www.marketsandmarkets.com/ResearchInsight/ai-impact-analysis-on-airborne-isr-industry.asp

  20. ISR-T | Defense.flir.com, accessed on October 24, 2025, https://defense.flir.com/isrt/overview/

  21. ISR and SIGINT | L3Harris® Fast. Forward., accessed on October 24, 2025, https://www.l3harris.com/all-capabilities/isr-and-sigint

  22. Full article: Automating the OODA loop in the age of intelligent machines: reaffirming the role of humans in command-and-control decision-making in the digital age, accessed on October 27, 2025, https://www.tandfonline.com/doi/full/10.1080/14702436.2022.2102486

  23. Automating the OODA Loop in the Age of AI - Nuclear Network - CSIS, accessed on October 27, 2025, https://nuclearnetwork.csis.org/automating-the-ooda-loop-in-the-age-of-ai/

  24. Setting expectations is more important than the work plan - RSM US, accessed on October 24, 2025, https://rsmus.com/insights/services/business-strategy-operations/setting-expectations-is-more-important-than-the-work-plan.html

  25. Escaping Tunnel Vision: Break Decision-Making Bias - ATR Soft Noni, accessed on October 24, 2025, https://www.atrsoft.com/noni/blog/escaping-tunnel-vision-break-decision-making-bias/

  26. Cyber Risks Associated with Generative Artificial Intelligence, accessed on October 24, 2025, https://www.mas.gov.sg/-/media/mas-media-library/regulation/circulars/trpd/cyber-risks-associated-with-generative-artificial-intelligence.pdf

  27. Leveraging Human– Machine Teaming | SCSP, accessed on October 27, 2025, https://www.scsp.ai/wp-content/uploads/2024/01/human-machine-teaming-sr-jan-2024.pdf

  28. JCN 1/18, Human-Machine Teaming - GOV.UK, accessed on October 27, 2025, https://assets.publishing.service.gov.uk/media/5b02f398e5274a0d7fa9a7c0/20180517-concepts_uk_human_machine_teaming_jcn_1_18.pdf

  29. Artificial Intelligence: DHS Needs to Improve Risk Assessment Guidance for Critical Infrastructure Sectors - GAO, accessed on October 24, 2025, https://www.gao.gov/products/gao-25-107435
