
EPISTEMIC POLICY CYCLE FRAMEWORK (EPCF)

Integrating Foresight, Pre-Decision Governance, and Ex-Post Evaluation for Robust Policy-Making

A Conceptual Framework for Global Policy Labs


Final Version – March 2026
Proponent: Abu Abdurrahman, M.H. (Accountability-Based Universal Wisdom and Trust)
Correspondence: tpapgtk@gmail.com
License: CC BY-NC-SA 4.0


TABLE OF CONTENTS

  • Executive Summary
  • 1. Introduction: Problem and Urgency
  • 2. Position within the CAA-PDG Architecture
  • 3. EPCF Conceptual Framework
  • 4. The Three Stages of EPCF: Definitions and Functions
  • 5. Measurement Instruments: IPDG, DQI, and PDRR
  • 6. Propositions for Empirical Testing
  • 7. Institutional Prerequisites and Limitations
  • 8. Research Agenda and Pilot Design
  • 9. Conclusion: Contribution to Global Governance
  • Bibliography
  • Appendix: Concise Measurement Protocols

EXECUTIVE SUMMARY

The Epistemic Policy Cycle Framework (EPCF) is an integrative framework that connects three key functions within the public policy cycle:

  1. Foresight – systematic exploration of the future to identify uncertainties and develop strategic scenarios.
  2. Pre-Decision Governance (PDG) – testing the quality of reasoning before strategic decisions are made, through four pillars: assumption testing, counter-framing, multi-option mandate, and structured dissent mechanisms.
  3. Ex-Post Evaluation – measuring policy outcomes and identifying epistemic failures as feedback for institutional learning.

Operational Definition:
Epistemic failure refers to policy failure primarily attributable to flawed assumptions, misframing of problems, or untested information rather than implementation constraints or external shocks.

The main contribution of EPCF lies not in the novelty of its individual components—as these three functions are already recognized in global policy literature and practice—but in the systematic epistemic integration between them. EPCF creates an epistemic loop ensuring that:

  • Assumptions generated from foresight are explicitly tested during the decision-making process (PDG);
  • Epistemic failures (due to flawed assumptions, biased framing, or untested information) are documented and analyzed during evaluation;
  • Evaluation results serve as feedback to improve foresight and PDG processes in subsequent cycles.

Novelty Statement:
EPCF does not introduce new stages to the policy cycle. Its novelty lies in establishing a structured epistemic linkage across policy stages—foresight, decision formation, and evaluation—ensuring that strategic assumptions are explicitly generated, tested, and retrospectively assessed.

Important Note: EPCF can be applied independently as an operational framework even without adopting the broader CAA-PDG architecture. This flexibility allows organizations to start with EPCF's key elements according to their needs and capacities.

This framework is designed for testing by global institutions such as the OECD, UNDP, World Bank, as well as international policy labs and think tanks working in the field of policy governance.

1. INTRODUCTION: PROBLEM AND URGENCY

1.1 Three Disconnected Functions

In global public policy practice, three important functions—foresight, decision-making, and evaluation—generally operate in separate silos:

| Function | Reference Institutions | Typical Output | Weakness |
|---|---|---|---|
| Foresight | OECD Strategic Foresight Unit, UNDP Global Centre for Public Service Excellence | Scenario reports, early warnings | Often disconnected from formal decision-making processes |
| Decision-Making | Regulatory Impact Assessment (RIA), policy analysis | Policy recommendations, feasibility documents | Quality of reasoning (assumptions, framing, dissent) is rarely audited systematically |
| Evaluation | OECD/DAC Evaluation Network, UNEG, World Bank IEG | Impact evaluation reports | Focuses on outcomes, rarely analyzes the epistemic roots of failure |

1.2 Why Now?

Over the past decade, the world has faced an unprecedented increase in complexity and uncertainty. The COVID-19 pandemic, accelerating climate change, technological disruption (especially artificial intelligence), and geopolitical tensions have shown that policies designed with stable, linear assumptions are prone to failure. This failure is often not caused by poor implementation, but by epistemic failure—untested assumptions, narrow problem framing, or unverified information at the planning stage.

Global institutions like the OECD, World Bank, and UNDP increasingly emphasize the importance of anticipatory governance and policy learning to address systemic risks. However, there is still no framework that explicitly links future exploration (foresight) with assumption testing in decision-making and evaluative learning. EPCF aims to fill this gap, offering a concrete mechanism to build policy resilience in an era of uncertainty.

1.3 Position within Policy Cycle and Policy Learning Literature

Classic literature on the policy cycle has long identified the stages in the policy process, from agenda-setting to evaluation (Lasswell, 1956; Anderson, 1975; Parsons, 1995). These models provide an important foundation for understanding policy as a staged process. However, this literature generally focuses on describing stages, not on mechanisms ensuring epistemic quality within them.

Meanwhile, the policy learning literature (Hall, 1993; Bennett & Howlett, 1992) emphasizes how organizations can adapt policies based on past experience. EPCF complements this literature by providing a concrete institutional mechanism to ensure that learning genuinely occurs—not only from policy outcomes, but also from the reasoning processes that preceded them.

Table 1. Comparison with Other Frameworks

| Framework | Primary Focus | EPCF's Contribution |
|---|---|---|
| Policy Cycle (Lasswell, Anderson, Parsons) | Stages of policy process (agenda-setting, formulation, implementation, evaluation) | Adds an epistemic layer linking stages cognitively |
| Policy Learning (Hall, Bennett & Howlett) | Policy adaptation based on experience and new knowledge | Provides institutional mechanism for structured learning across cycles |
| EPCF | Epistemic linkage across stages (foresight → PDG → evaluation) | Integrates the three functions into one measurable epistemic loop |

2. POSITION WITHIN THE CAA-PDG ARCHITECTURE

EPCF is built upon the theoretical foundations developed within the Cognitive Accountability Architecture (CAA) and Pre-Decision Governance (PDG) frameworks.1 It nevertheless stands on its own: organizations can adopt EPCF's key elements according to their needs and capacities without taking on the broader CAA-PDG architecture.

LEVEL 1: META-THEORY
  Cognitive Accountability Architecture (CAA)
  "Why is cognitive accountability necessary?"
  (Epistemic transparency, assumption testing, dissent, cognitive learning)
      │
      ▼
LEVEL 2: CORE THEORY
  Pre-Decision Governance (PDG)
  "How is cognitive accountability institutionalized?"
  (Four pillars: Framing, Option Architecture, Information Filtering, Deliberative Structure)
      │
      ▼
LEVEL 3-5: DERIVED THEORIES & INSTRUMENTS
  • Mechanism theories (ATT, CFT, MOMT, SDT, SCPT, ERMT)
  • AI governance theories (AIDAT, ABMGT, AITEMT, HILDST)
  • Measurement instruments (IPDG, DQI, ETD, PDRR)
      │
      ▼
LEVEL 6: CONTEXTUAL ADAPTATION FRAMEWORK
  EPISTEMIC POLICY CYCLE FRAMEWORK (EPCF)
  (Integration of foresight + PDG + evaluation in the public policy cycle)

Position Explanation:

  • EPCF is situated at Level 6: Contextual Adaptation Framework—a layer that adapts PDG to the specific context of the public policy cycle.
  • It is not a new theory, but rather an integrative framework demonstrating how PDG interacts with the established domains of foresight and evaluation.
  • Thus, EPCF remains consistent with the ABUWT hierarchy while extending PDG's applicability to the extreme upstream (foresight) and downstream (in-depth evaluation).

3. EPCF CONCEPTUAL FRAMEWORK

3.1 The Epistemic Loop Model

EPCF is designed as a closed-loop system ensuring epistemic feedback between stages:

FORESIGHT
    │
    ▼
PDG (assumption testing)
    │
    ▼
DECISION
    │
    ▼
EVALUATION
    │
    ▼
EPISTEMIC FEEDBACK
    ├──► Improve Foresight
    └──► Improve PDG

3.2 Detailed Stages

STAGE 1: EXTREME UPSTREAM (Foresight)
  Objective: Systematically explore future possibilities, identify systemic uncertainties, develop strategic scenarios.
  MANDATORY OUTPUT: List of explicit strategic assumptions to be tested in Stage 2.
      │
      ▼
STAGE 2: MIDDLE UPSTREAM (Pre-Decision Governance)
  Objective: Ensure the reasoning behind strategic decisions is systematically tested before resource commitment.
  Method: The four pillars of PDG:
    1. Framing Governance
    2. Option Architecture
    3. Information Filtering (testing assumptions from Stage 1)
    4. Deliberative Structure
  Output: IPDG score, documentation of tested assumptions, documented dissent.
      │
      ▼
DECISION
      │
      ▼
STAGE 3: DOWNSTREAM (Ex-Post Evaluation & Learning)
  Objective: Measure policy outcomes, identify epistemic failures, provide feedback for future policy cycles.
  Evaluation focus:
    a) Policy outcomes (DQI)
    b) Reversal rate due to analytical flaws (PDRR)
    c) Quality of assumptions tested in Stage 2
  Output: DQI score, epistemic failure analysis, recommendations for improvement.
      │
      ▼
EPISTEMIC FEEDBACK: Evaluation results used to improve:
  • Quality of foresight (more relevant assumptions)
  • Quality of PDG (stricter testing processes)
  • Policy design in subsequent cycles

3.3 Illustrative Case: Post-Pandemic Renewable Energy Policy

To clarify how EPCF works in practice, consider the following hypothetical application to a national renewable energy policy.

Context: Following the COVID-19 pandemic, a country aims to accelerate its renewable energy transition to foster a green economic recovery. The Ministry of Energy drafts a policy targeting 30% renewable energy by 2030.

Stage 1: Foresight
An inter-ministerial foresight team (Energy, Finance, Industry) conducts horizon scanning and develops three main scenarios:

  • Scenario A (Optimistic): Robust global investment flows, cheap solar technology, high carbon prices.
  • Scenario B (Moderate): Slower investment flow, stable technology, moderate carbon prices.
  • Scenario C (Pessimistic): Prolonged economic crisis, increased protectionism, stagnant technology.

From these scenarios, the foresight team generates a list of explicit strategic assumptions, for example:

  • Assumption 1: International interest rates will remain low for the next 5 years (relevant for project financing).
  • Assumption 2: The global solar panel supply chain will not face significant disruptions.
  • Assumption 3: The carbon price will reach $50/ton by 2028.

Stage 2: Pre-Decision Governance (PDG)
The Ministry of Energy brings the policy draft and the list of assumptions into an Executive Challenge Session involving the Ministry of Finance, the National Development Planning Agency, and academics. Using the four pillars of PDG:

  • Framing Governance: Is the problem solely "energy transition," or does it also encompass "social justice for fossil fuel workers"?
  • Option Architecture: Three policy options are explored: (1) direct subsidies for solar projects, (2) tax incentives, (3) a feed-in tariff scheme.
  • Information Filtering: Assumptions from foresight are tested. The Ministry of Finance questions Assumption 1 (low interest rates) with recent central bank data indicating a potential for faster rate hikes.
  • Deliberative Structure: An academic is appointed as an "official challenger" and documents their objection to Assumption 2 (supply chain) by referring to recent geopolitical tensions.

As a result, the policy is revised: the 30% target is maintained, but the financing scheme is designed more flexibly to anticipate interest rate increases, and an option for diversifying solar panel imports is added. The IPDG score for this process is recorded as 14/16 (categorized as "Good").

Stage 3: Ex-Post Evaluation & Epistemic Feedback
Three years later, the policy is evaluated. Renewable energy realization has only reached 18%. Analysis shows that Assumption 1 (low interest rates) proved incorrect as interest rates rose faster, causing some private projects to be canceled. However, due to the flexibility designed in Stage 2, the government was able to shift funding towards tax incentive schemes less sensitive to interest rates. The DQI score is 3.8/5 (categorized as "Fairly Good").

More importantly, the PDRR is calculated: out of 10 strategic decisions in the energy sector during this period, only 1 was completely canceled due to analytical flaws (not due to external factors like the pandemic). PDRR = 10%. The results of this evaluation become feedback: the foresight process is improved by incorporating more varied interest rate scenarios, and PDG is strengthened by involving financial experts earlier.

4. THE THREE STAGES OF EPCF: DEFINITIONS AND FUNCTIONS

4.1 Stage 1: Extreme Upstream – Epistemic Foresight

Objective: Systematically explore future possibilities and generate explicit strategic assumptions to be tested during the decision-making process.

Globally Recognized Methods:

  • Horizon scanning (UN Global Pulse, OECD)
  • Scenario planning (Shell, RAND Corporation)
  • Backcasting (The Natural Step)
  • Delphi method (RAND)

Mandatory Outputs:

  • Future scenarios (minimum of 3 scenarios with distinct logics).
  • A list of explicit strategic assumptions underlying each scenario.
  • Identification of critical uncertainties requiring continuous monitoring.

Global Reference Institutions:

  • OECD Strategic Foresight Unit
  • UNDP Global Centre for Public Service Excellence
  • World Bank Foresight Group
  • EU Policy Lab – Foresight
  • IPBES (Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services)
  • IPCC (Intergovernmental Panel on Climate Change)

4.2 Stage 2: Middle Upstream – Pre-Decision Governance (PDG)

Objective: Ensure that the reasoning behind strategic decisions is systematically tested before resource commitments are made.

The Four Pillars of PDG (with connection to Stage 1):

| Pillar | Function | Connection to Foresight |
|---|---|---|
| Framing Governance | Tests problem definition: is the problem framed correctly? | Ensures policy framing is consistent with foresight scenarios. |
| Option Architecture | Explores at least three distinct policy alternatives equally. | Alternatives must include responses to various foresight scenarios. |
| Information Filtering | Tests assumptions and data validity. | MANDATORY to test assumptions generated from Stage 1. |
| Deliberative Structure | Creates a safe space for structured dissent. | Dissent can target assumptions derived from foresight. |

Mandatory Outputs:

  • IPDG (Pre-Decision Governance Index) score for each strategic decision.
  • Documentation of assumptions tested, including those from Stage 1.
  • Documentation of substantive dissent and responses to it.
  • Final decision recommendation with epistemic justification.

Global Reference Institutions:

  • OECD Regulatory Policy Committee
  • World Bank Global Governance Practice
  • INTOSAI (International Organization of Supreme Audit Institutions)
  • IIA (Institute of Internal Auditors)
  • National audit offices (NAO UK, GAO US, BPK Indonesia)

4.3 Stage 3: Downstream – Epistemic Evaluation & Learning

Objective: Measure policy outcomes, identify epistemic failures, and provide feedback for subsequent cycles.

Two Evaluation Dimensions:

| Dimension | Focus | Instrument |
|---|---|---|
| Conventional Evaluation | Outcomes, efficiency, impact | OECD/DAC criteria, cost-benefit analysis, impact evaluation |
| Epistemic Evaluation | Quality of assumptions, framing accuracy, effectiveness of dissent mechanisms | DQI (Decision Quality Index), epistemic failure analysis |

Mandatory Outputs:

  • DQI (Decision Quality Index) score for the evaluated decision.
  • PDRR (Post-Decision Reversal Rate) – percentage of strategic decisions canceled or fundamentally revised due to analytical flaws.
  • Epistemic failure analysis: was failure caused by flawed assumptions, biased framing, untested information, or ignored dissent?
  • Recommendations for improving foresight and PDG processes in subsequent cycles.

Global Reference Institutions:

  • OECD/DAC Evaluation Network
  • UNEG (United Nations Evaluation Group)
  • World Bank Independent Evaluation Group (IEG)
  • CLEAR (Centers for Learning on Evaluation and Results)
  • National evaluation offices (MEAL, M&E units)

5. MEASUREMENT INSTRUMENTS: IPDG, DQI, AND PDRR

5.1 Pre-Decision Governance Index (IPDG)

The IPDG measures the quality of the pre-decision process across four dimensions using a 0–2 scale per indicator.

| Dimension | Indicator | Scale 0 | Scale 1 | Scale 2 |
|---|---|---|---|---|
| Framing Governance | Root cause analysis | None | Simple analysis | Systematic analysis (fishbone, 5 whys, data) |
| Framing Governance | Stakeholder participation | None | Informal | Formal and documented |
| Option Architecture | Exploration of alternatives | Single option | Two options | Three or more options |
| Option Architecture | Cost-benefit/risk analysis | None | For some options | For all options |
| Information Filtering | Verification of key data | Unverified | Internal verification | Independent verification |
| Information Filtering | Testing critical assumptions (including from foresight) | Not identified | Identified, not tested | Tested and documented |
| Deliberative Structure | Formal challenger mechanism | None | Informal | Formal assignment |
| Deliberative Structure | Dissent documentation | Not recorded | Recorded, no follow-up | Recorded and followed up |
Total IPDG Score: (Sum of scores / Maximum possible score) × 100%
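The calculation above can be sketched in code. The following Python sketch is illustrative only: the indicator keys mirror the concise form in the Appendix (F1-D2) but are naming assumptions, and the worked example reproduces the hypothetical 14/16 score from the case in Section 3.3.

```python
# Illustrative IPDG calculation: eight indicators, each scored 0-2.
# Indicator keys follow the Appendix's concise form (F1..D2); the exact
# names are assumptions made for this sketch.

IPDG_INDICATORS = [
    "F1_root_cause_analysis", "F2_stakeholder_participation",
    "O1_alternative_exploration", "O2_cost_benefit_risk_analysis",
    "I1_key_data_verification", "I2_assumption_testing",
    "D1_formal_challenger", "D2_dissent_documentation",
]

def ipdg_score(scores: dict) -> float:
    """(Sum of 0-2 indicator scores / maximum possible score) * 100."""
    for name, value in scores.items():
        if value not in (0, 1, 2):
            raise ValueError(f"{name}: indicators are scored 0, 1, or 2")
    return sum(scores.values()) / (2 * len(scores)) * 100

# The hypothetical energy-policy case in Section 3.3 scored 14/16;
# which two indicators lost a point each is an assumption here.
case = {k: 2 for k in IPDG_INDICATORS}
case["I2_assumption_testing"] = 1
case["D2_dissent_documentation"] = 1
print(f"IPDG = {ipdg_score(case):.1f}%")  # prints "IPDG = 87.5%"
```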

5.2 Decision Quality Index (DQI)

The DQI measures the quality of decision outcomes after implementation (minimum 6–12 months).

| Dimension | Indicators | Data Sources |
|---|---|---|
| Implementation Success | Budget realization, timeliness | Financial reports, progress reports |
| Goal Achievement | Achievement of performance indicators | Evaluation reports, statistical data |
| Resource Efficiency | Output/input ratio, cost overruns | Audit reports |
| Long-term Impact | Sustained effects after intervention ends | Impact studies, follow-up evaluations |
| Stakeholder Satisfaction | Perceptions of target groups | Surveys, interviews |

DQI Score: Weighted average of scores for each dimension (scale 1–5).
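A minimal sketch of this weighted average, assuming equal weights by default (the framework leaves the weighting scheme to the implementing organization; the dimension names and example values below are illustrative):

```python
def dqi_score(scores: dict, weights: dict = None) -> float:
    """Weighted average of 1-5 dimension scores (equal weights if none given)."""
    if weights is None:
        weights = {dim: 1.0 for dim in scores}
    for dim, value in scores.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{dim}: dimension scores use a 1-5 scale")
    total_weight = sum(weights[dim] for dim in scores)
    return sum(scores[dim] * weights[dim] for dim in scores) / total_weight

# Equal-weight example over the five DQI dimensions; the individual
# scores are assumed, chosen to reproduce the 3.8/5 case in Section 3.3.
case = {
    "implementation_success": 4,
    "goal_achievement": 3,
    "resource_efficiency": 4,
    "long_term_impact": 4,
    "stakeholder_satisfaction": 4,
}
print(f"DQI = {dqi_score(case):.1f}/5")  # prints "DQI = 3.8/5"
```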

5.3 Post-Decision Reversal Rate (PDRR)

The PDRR measures the rate of cancellation or fundamental revision of strategic decisions due to analytical flaws (not due to external factors).

PDRR = (Number of reversals due to analytical flaws / Total strategic decisions) × 100%

Operational Definitions:

  • Reversal: a decision is canceled, postponed indefinitely, or fundamentally revised (changing its core objectives, budget, or mechanisms).
  • Analytical flaw: flawed assumptions, biased framing, invalid information, ignored dissent (evidenced by IPDG documentation).
  • External factors: national policy changes, economic crises, natural disasters, government transitions—excluded.

Concrete Example: An infrastructure investment policy is canceled after two years because the cost projections used during planning were based on unrealistic material price assumptions (e.g., ignoring potential price increases due to inflation or supply chain disruptions). If these assumptions were never tested during the pre-decision process and subsequently became the primary cause of cost overruns forcing project termination, this is recorded as a reversal due to analytical flaw in the PDRR.
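The PDRR formula and its exclusion rule can be sketched as follows; the function names are assumptions, and the example figures are taken from the Appendix's example table:

```python
def pdrr(reversals_due_to_analytical_flaws: int,
         total_strategic_decisions: int) -> float:
    """PDRR = (reversals due to analytical flaws / total decisions) * 100.

    Per the operational definitions, reversals driven by external factors
    (economic crises, disasters, government transitions) must be excluded
    from the numerator BEFORE calling this function.
    """
    if total_strategic_decisions <= 0:
        raise ValueError("need at least one strategic decision in the window")
    if not 0 <= reversals_due_to_analytical_flaws <= total_strategic_decisions:
        raise ValueError("reversals must be between 0 and total decisions")
    return reversals_due_to_analytical_flaws / total_strategic_decisions * 100

# Figures from the example table in the Appendix:
print(f"2023: {pdrr(2, 20):.1f}%")  # prints "2023: 10.0%"
print(f"2024: {pdrr(1, 22):.1f}%")  # prints "2024: 4.5%"
print(f"2025: {pdrr(0, 25):.1f}%")  # prints "2025: 0.0%"
```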

6. PROPOSITIONS FOR EMPIRICAL TESTING

6.1 Main Propositions

Proposition 1 (Epistemic Integration): Implementing EPCF increases the probability that strategic assumptions from foresight are genuinely tested during the decision-making process, reflected in higher IPDG scores on the information filtering dimension.

Proposition 2 (Decision Quality): Decisions that undergo the full EPCF process (foresight + PDG + evaluation) have higher DQI scores compared to decisions undergoing only one or two stages, after controlling for contextual factors (bureaucratic capacity, political stability, resource availability).

Proposition 3 (Institutional Learning): Organizations consistently applying EPCF show a decrease in PDRR over the long term (3–5 years) due to an increased ability to detect and correct epistemic flaws before decisions are made. Thus, EPCF functions as an institutional mechanism for structured policy learning across cycles, aligning with the concept of policy learning developed by Hall (1993).

Proposition 4 (Context Moderation): The effectiveness of EPCF is moderated by institutional capacity: organizations with higher policy analytical capacity (Wu, Ramesh & Howlett, 2015; Howlett & Ramesh, 2016) will derive greater benefit from implementing EPCF.

6.2 Qualified Claims

Acknowledging the complexity of policy determinants, the propositions above are framed in terms of probability and association, not causal determinism. EPCF does not claim to guarantee policy success, but contributes to:

  • Improved quality of pre-decision reasoning.
  • Reduced risk of failure due to epistemic flaws.
  • Strengthened institutional learning across policy cycles.

7. INSTITUTIONAL PREREQUISITES AND LIMITATIONS

7.1 Prerequisites for Effective Implementation

EPCF is not self-executing. Its effectiveness depends on:

  1. Leadership commitment – Adopting EPCF requires support from the highest level of the organization, as it demands changes in decision-making procedures.
  2. Analytical capacity – The organization must have staff skilled in foresight, assumption testing, and epistemic evaluation.
  3. Culture of openness – PDG requires a safe space for dissent; without this, structured challenge mechanisms become empty formalities.
  4. Documentation system – All stages must be well-documented to enable epistemic audit and feedback.
  5. Enforcement mechanisms – There must be consequences (positive or negative) linked to the quality of the pre-decision process to shift incentives.

7.2 Conceptual Limitations

  1. Does not replace politics – EPCF does not eliminate the role of power, interests, and negotiation in policy. It only ensures that the reasoning process is documented and accountable.
  2. Does not guarantee correct decisions – Process quality does not perfectly correlate with outcome quality, especially under extreme uncertainty.
  3. Procedural burden – Implementing EPCF requires additional time and resources. Therefore, it is recommended for strategic, high-risk, high-impact decisions, not for all decision types.
  4. Risk of ritualism – Like all governance procedures, EPCF can be reduced to a documentation ritual without substance. Meta-governance mechanisms (such as those proposed in the IPDG) are needed to mitigate this risk.

8. RESEARCH AGENDA AND PILOT DESIGN

8.1 Realistic Pilot Design

EPCF can be tested through a phased approach:

| Phase | Duration | Activity | Output |
|---|---|---|---|
| 90-Day Pilot | 3 months | Apply EPCF to 2-3 strategic decisions in one organizational unit | Pilot report, IPDG scores, implementation feedback |
| Comparative Study | 1-2 years | Compare policies with and without EPCF in 3-5 countries | Comparative analysis, identification of contextual factors |
| Longitudinal Study | 3-5 years | Long-term observation of organizations adopting EPCF | PDRR trends, institutional learning, proposition validation |

8.2 Potential Partners for Testing

The following institutions have the mandate and capacity to test a framework like EPCF:

| Institution | Relevant Unit | Potential Form of Testing |
|---|---|---|
| OECD | Public Governance Directorate, Regulatory Policy Committee, Strategic Foresight Unit | Policy lab, member country case studies |
| UNDP | Global Centre for Public Service Excellence, evaluation offices | Pilot in development programs, policy evaluation |
| World Bank | Global Governance Practice, Independent Evaluation Group, Foresight Group | Governance experiments, cross-country comparative studies |
| EU | EU Policy Lab, Joint Research Centre | Horizon Europe research projects, innovation policy |
| INTOSAI | Various country committees | Performance audits with an epistemic dimension |
| Global Think Tanks | RAND, Brookings, ODI (Overseas Development Institute) | Policy research, program evaluation |

8.3 Pilot Success Indicators

  • Feasibility: Can EPCF be implemented with available resources?
  • Acceptability: Do decision-makers and staff accept this framework?
  • Early impact: Is there an increase in IPDG scores? Are assumptions from foresight genuinely tested?
  • Learning: Does the organization show an increased ability to detect epistemic flaws?

9. CONCLUSION: CONTRIBUTION TO GLOBAL GOVERNANCE

9.1 Theoretical Contributions

  1. Expands policy cycle theory (Lasswell, 1956; Anderson, 1975; Parsons, 1995) by making explicit an epistemic dimension that these models leave implicit.
  2. Integrates three previously separate literatures: futures studies, governance studies, and evaluation studies.
  3. Bridges macro-level analysis (foresight) with micro-institutional mechanisms (PDG) and empirical feedback (evaluation).
  4. Contributes to policy design theory (Howlett, 2019; Peters, 2018) by operationalizing how policy design quality can be enhanced through longitudinal assumption testing.
  5. Operationalizes the concept of policy learning (Hall, 1993) into a structured institutional mechanism, enabling organizations to learn not only from policy outcomes but also from the reasoning processes that preceded them.

9.2 Practical Contributions

  1. Provides a common language for three policy communities that typically work in silos.
  2. Offers concrete instruments (IPDG, DQI, PDRR) ready for testing.
  3. Helps organizations build deeper institutional learning systems—not just learning from outcomes, but learning from the thought processes preceding decisions.

9.3 Limitations and Invitation

EPCF is a conceptual framework that has not been empirically validated. It is offered as:

"A message in a bottle" for the global policy community—an invitation to test, critique, and, if proven useful, adapt it to respective contexts.

As stated in the original CAA document:

"Let time judge whether this idea will be useful. For now, it is simply placed here, waiting to be found by those who need it."

EPCF is part of that message. Now, it is up to the global policy community to find, test, and—if worthy—use it.


BIBLIOGRAPHY

Freely Accessible Sources:

  • Anderson, J. E. (1975). Public Policymaking: An Introduction. Houghton Mifflin.
  • Bennett, C. J., & Howlett, M. (1992). The Lessons of Learning: Reconciling Theories of Policy Learning and Policy Change. Policy Sciences.
  • Bovens, M. (2007). Analysing and Assessing Accountability: A Conceptual Framework. European Law Journal.
  • Fukuyama, F. (2013). What Is Governance? Governance.
  • Hall, P. (1993). Policy Paradigms, Social Learning, and the State. Comparative Politics.
  • Howlett, M. (2019). Designing Public Policies: Principles and Instruments. Routledge.
  • Howlett, M., & Ramesh, M. (2016). Achilles' Heels of Governance: Critical Capacity Gaps. Governance.
  • Howlett, M., Ramesh, M., & Wu, X. (2015). Understanding the Persistence of Policy Failures. Policy Sciences.
  • Janis, I. (1982). Groupthink: Psychological Studies of Policy Decisions and Fiascoes.
  • Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica.
  • Lasswell, H. (1956). The Decision Process: Seven Categories of Functional Analysis. University of Maryland Press.
  • North, D. (1990). Institutions, Institutional Change and Economic Performance. Cambridge University Press.
  • Ostrom, E. (2005). Understanding Institutional Diversity. Princeton University Press.
  • Parsons, W. (1995). Public Policy: An Introduction to the Theory and Practice of Policy Analysis. Edward Elgar.
  • Peters, B. G. (2018). Policy Problems and Policy Design. Edward Elgar.
  • Simon, H. (1947). Administrative Behavior: A Study of Decision-Making Processes in Administrative Organization. Macmillan.
  • Wu, X., Ramesh, M., & Howlett, M. (2015). Policy Capacity: A Conceptual Framework for Understanding Policy Competences. Policy and Society.

ABUWT Documents:

  • CAA v3.2 – Cognitive Accountability Architecture
  • PDG v4.1 – Pre-Decision Governance
  • IPDG 47 Indicators and Meta-Governance
  • DQI – Decision Quality Index
  • PDRR – Post-Decision Reversal Rate
  • Fragility of the PDG Concept

APPENDIX: CONCISE MEASUREMENT PROTOCOLS

A. Concise IPDG Form (for Pilot Testing)

| No | Indicator | Score (0/1/2) | Notes |
|---|---|---|---|
| F1 | Root cause analysis | | |
| F2 | Stakeholder participation | | |
| O1 | Exploration of alternatives (≥3 options) | | |
| O2 | Cost-benefit/risk analysis for all options | | |
| I1 | Verification of key data by independent party | | |
| I2 | Testing critical assumptions (including from foresight) | | |
| D1 | Formal challenger mechanism | | |
| D2 | Dissent documentation and follow-up | | |
| | Total Score | /16 | |

B. Concise DQI Form

| Dimension | Score (1-5) | Evidence/Source |
|---|---|---|
| Implementation Success | | |
| Goal Achievement | | |
| Resource Efficiency | | |
| Long-term Impact | | |
| Stakeholder Satisfaction | | |
| Average | /5 | |

C. PDRR Form

| Year | Total Strategic Decisions | Reversals due to Analytical Flaws | PDRR (%) |
|---|---|---|---|
| T-2 | | | |
| T-1 | | | |
| T | | | |

Example Entry:

| Year | Total Strategic Decisions | Reversals due to Analytical Flaws | PDRR (%) |
|---|---|---|---|
| 2023 | 20 | 2 (e.g., infrastructure project A and energy policy B) | 10% |
| 2024 | 22 | 1 (agricultural policy C) | 4.5% |
| 2025 | 25 | 0 | 0% |

1 Integration with Islamic scholarly tradition: The CAA and PDG frameworks underpinning EPCF also resonate with classical Islamic scholarly traditions, such as sanad criticism in Hadith sciences (as a form of assumption testing), the principle of jarh wa ta'dil (critique of information source credibility), and shura-based deliberation (as a form of structured dissent). This resonance indicates that epistemic accountability is a universal need emerging across different civilizations. However, to maintain focus on global applicability and avoid potential cross-cultural miscommunication, further elaboration on this integration is not included in the main EPCF framework. Interested readers may refer to the original CAA v3.2 document.

License: This document is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0). Feel free to read, adapt, and share it for non-commercial purposes, provided attribution is given to the source.

Contact: tpapgtk@gmail.com

Final Version: March 7, 2026