
Sunday, March 29, 2026

Why Better Thinking Fails in Complex Decisions — And What Works Instead

Most organizations try to improve decisions by training people to think better.

But in complex environments, that strategy may be targeting the wrong problem.

The real bottleneck is not belief accuracy. It is problem representation.



One Core Claim: Minimal Enforceable Decision Structure Dominates Individual Debiasing in Reducing Decision Error under High Complexity


A Testable Theory of Organizational Decision‑Making Under Epistemic Load


Accountability‑Based Universal Wisdom and Trust

Cross‑Sector Pre‑Decision Governance Translator


March 2026

License: CC BY‑NC‑SA 4.0

Contact: tpapgtk@gmail.com

---


Abstract


This paper presents a pre‑registered study design and theoretical framework. No empirical data have been collected as of this writing; all numerical illustrations (e.g., an effect size of 28%) are hypothetical values derived from theoretical assumptions, provided solely to demonstrate feasibility and power calculations, and the actual study may yield different results. We propose and experimentally test a single core claim: in high‑complexity organizational decisions, a minimal enforceable decision structure (EDS) dominates individual debiasing in reducing economically meaningful decision error (≥5% reduction in a standardized outcome or equivalent cost‑adjusted loss). We formalize this as a regime in which the marginal value of expanding the representation space exceeds the marginal value of improving belief accuracy within a fixed representation. We define an EDS as the lowest‑cost protocol that (i) expands the set of considered representations while constraining it to a structured, tractable subset (a constrained expansion under bounded evaluation capacity) and (ii) is verifiably enforceable at the decision‑instance level; we claim sufficiency for dominance under high epistemic load, not optimality. Because EDS is a minimally enforceable governance protocol, our comparison is between enforceable governance and non‑enforceable cognition as realistically deployed in organizations, not between structure and cognition in isolation: the relevant counterfactual is not ideal debiasing, but debiasing as organizations actually implement it.

Using a pre‑registered randomized controlled trial with 40 organizational units, we vary (i) decision complexity, measured by pre‑specified binary thresholds, a continuous index validated against external criteria, and an exogenous complexity shock, and (ii) intervention type (EDS vs. individual debiasing vs. control). We predict consistent results across all three complexity measures. Formally, our estimand is  \mathbb{E}[Y \mid \text{EDS}, \text{High Complexity}] - \mathbb{E}[Y \mid \text{Debiasing}, \text{High Complexity}] : a policy‑relevant comparison between implementable intervention bundles, i.e., the margin organizations actually face, not a primitive causal decomposition of structure and cognition. We hypothesize that the effect is increasing in the epistemic gap between actual decision complexity and the organization's existing governance capacity, operationalized as unpriced risk: the residual variance in decision outcomes unexplained by existing governance controls.

Mechanism identification relies on experimental variation in protocol components (Appendix C), not on observational mediation. We measure frame completeness as coverage of an ex‑ante validated relevance set plus predictive sufficiency; it is validated by its incremental predictive contribution relative to (i) baseline heuristics and (ii) unstructured elicitation, under cross‑validation and temporal holdout, and we do not interpret its coefficients causally. The design discriminates among mechanism classes: an effort‑based account predicts increased cognitive load and decision time without improvements in frame completeness; a pure coordination account predicts reduced belief dispersion without increased frame dimensionality; a pure attention account predicts uniform improvements across complexity regimes. We predict rejection of all three. Our operationalization of complexity captures combinatorial growth of the representation space, not merely stochastic uncertainty: the defining feature of high complexity in our framework is that marginal variables increase interaction dimensionality rather than additive variance. When the space of plausible problem representations grows faster than the capacity to evaluate them, constraining the representation space dominates improving belief accuracy within any given representation; external validity follows from this structural invariance condition. We hypothesize that a substantial share of decision error originates at the problem‑representation stage, implying that under high epistemic load, investments in decision structure may yield higher returns than investments in individual training, and that a large class of behavioral interventions may be targeting a non‑binding constraint. While prior literature studies debiasing and structured decision tools separately, no study provides a pre‑registered causal comparison across experimentally varied complexity regimes. We do not imply that cognition is unimportant; rather, under high epistemic load, improving cognition within a mis‑specified representation yields lower marginal returns than restructuring the representation itself.


---


1. Introduction


This paper presents a pre‑registered study design and theoretical framework; no empirical data have been collected as of this writing, and all numerical illustrations (e.g., an effect size of 28%) are hypothetical values used for feasibility and power calculations. We hypothesize that under high epistemic load, the binding constraint on decision quality shifts from belief accuracy to problem representation. As a result, enforceable decision structures, rather than individual debiasing, will deliver larger reductions in decision error. An EDS expands the set of considered representations while constraining it to a structured, tractable subset, and should be read as a minimally enforceable governance protocol; our comparison is therefore between enforceable governance and non‑enforceable cognition as realistically deployed in organizations, not between structure and cognition in isolation. The relevant counterfactual is not ideal debiasing, but debiasing as organizations actually implement it. Most theories of organizational decision‑making fall into two families: those that focus on individual cognitive biases (behavioral economics; Kahneman & Tversky) and those that focus on institutional structures (organizational economics; Williamson). Yet the literature lacks a sharp test of when and why one mechanism dominates the other: prior work studies structured decision tools and debiasing separately, and there is no causal test of their relative performance across complexity regimes. We provide that test.


We propose and test a single, crisp claim:


In high‑complexity environments, a minimal enforceable decision structure (EDS) will dominate individual debiasing in reducing decision error. In low‑complexity environments, both will be equally effective.


This claim is:


· Falsifiable – it can be rejected by a properly designed experiment.

· A sharp boundary condition – it specifies the domain where debiasing’s marginal returns collapse and structural interventions become the binding constraint.

· Directly testable using a randomized controlled trial with organizational units.


Interpretation and Limits of the Estimand (read once). Our design compares two bundled interventions; it does not isolate a pure “structure” effect. The comparison is between realistically implementable interventions as deployed in organizations, where structured protocols are enforceable while cognitive training typically is not. Our estimand is not a primitive causal parameter, but a policy‑relevant comparison between enforceable and non‑enforceable interventions—i.e., the relevant margin faced by organizations. Thus, our contribution is to identify dominance in implementable intervention space, not to decompose cognition and structure. We do not claim that a fully enforceable debiasing intervention would not perform similarly. Our claim is strictly comparative across implementable bundles and establishes a dominance relation in policy‑relevant intervention space. This aligns with a policy‑relevant reduced‑form tradition where identification of dominance in implementable intervention space is sufficient for welfare‑relevant inference.


Under high complexity, the space of possible frames grows combinatorially, making marginal improvements in belief accuracy second‑order relative to errors in problem representation. Our operationalization of complexity captures this combinatorial growth, not merely stochastic uncertainty: the defining feature of high complexity is that marginal variables increase interaction dimensionality rather than additive variance. We hypothesize that frame mis‑specification is a key driver of error, though we do not fully exclude alternative interpretations such as improved coordination or attention allocation. Debiasing operates within a given representation; EDS expands the representation space itself. We measure frame completeness as coverage of an ex‑ante validated relevance set plus predictive sufficiency (Section 4.3); because any relevance set may reflect expert bias, we will report results across multiple independently elicited panels, show robustness to leave‑one‑panel‑out constructions, and benchmark against a baseline heuristic model. Mechanism identification relies on experimental variation in protocol components (Appendix C), not on observational mediation.

We distinguish gross cognitive effort from effective cognitive load: while EDS may increase procedural effort, it reduces unstructured cognitive burden, allowing us to reject effort‑based explanations. A pure coordination mechanism predicts reduced belief dispersion without increased frame dimensionality; a pure attention mechanism predicts uniform improvements across complexity regimes. Our design tests, and predicts rejection of, both. We do not identify the primitive causal channel, but we provide discriminating evidence against a broad class of alternatives: effort, coordination, and attention. If debiasing improves performance under low complexity but not under high complexity, belief accuracy is not the binding constraint under high complexity. Our contribution is to identify a regime in which the primary bottleneck shifts from belief accuracy to problem representation; to our knowledge, this is among the first pre‑registered randomized tests explicitly designed to compare structured protocols and debiasing across experimentally varied complexity regimes.


The paper is structured as a single testable hypothesis, supported by a minimal formal model, a clear identification strategy, and a pre‑registered experimental design. The decisions we study are mid‑to‑high stakes, multi‑actor, and involve forecastable outcomes within a 6–12 month horizon. Typical examples include procurement decisions, policy design, strategic planning, and investment committee deliberations. The theory does not extend to one‑shot irreversible decisions or purely technical optimization problems. Practitioners can diagnose applicability by testing whether adding one additional factor increases the dimensionality of the decision space non‑linearly (e.g., via interaction growth or forecast instability). Our sharp testable implication is:


Condition, Standard Behavioral View (Modern), and Our Prediction:

  • Low complexity (standard view: debiasing works): same; both interventions reduce error vs. control
  • High complexity (standard view: debiasing attenuates): EDS will dominate in reducing decision error


This paper contributes:


1. A direct causal comparison – among the first pre‑registered randomized tests comparing structured protocols and debiasing across experimentally varied complexity regimes.

2. Identification of a regime shift – hypothesizing that under high epistemic load, the primary bottleneck moves from belief accuracy to problem representation.

3. A policy‑relevant dominance result – demonstrating that, within the space of realistically enforceable organizational interventions, structured protocols will dominate individual debiasing under high complexity.


---


2. Hard Core (Reformulated as Non‑Obvious Structural Claims)


We define three hard‑core propositions that are distinctive and falsifiable:


Hard core propositions:

HC1*: Errors in complex organizational decisions are primarily driven by errors in problem representation (frame mis‑specification) rather than by incorrect beliefs conditional on a given frame. We interpret HC1* as a dominant empirical regularity within the tested regime, not as a universally identified primitive causal mechanism.

HC2*: Interventions targeting the structure of pre‑decision deliberation dominate interventions targeting individual cognition when epistemic load exceeds a threshold L.

HC3*: There exists a regime in which increasing the quantity of information without structured processing worsens decision quality (an "information overload trap"). HC3* is presented as a secondary supporting test in Appendix B.


These claims are:


· Non‑trivial – they are not shared by standard behavioral economics or organizational theory.

· Testable – each implies a specific experimental prediction.

· Falsifiable – a single well‑designed experiment can reject them.


---


3. A Minimal Reduced‑Form Model (Identifiable and Estimable)


We model decision quality as a function of intervention type and complexity. To absorb time‑invariant organizational heterogeneity and account for within‑unit serial correlation, we will include unit fixed effects and cluster standard errors at the unit level. Let:


Y_{ijt} = \alpha + \beta_1 \cdot \text{EDS}_{ij} + \beta_2 \cdot \text{Debias}_{ij} + \beta_3 \cdot \text{Complexity}_{ijt} + \beta_4 \cdot (\text{EDS} \times \text{Complexity})_{ijt} + \mu_j + \epsilon_{ijt}


where:


· Y_{ijt} = decision outcome quality for decision i in unit j at time t. Because ex‑post outcomes are noisy and context‑dependent, we will use a standardized composite outcome as the primary measure: Y_{ijt} = \frac{1}{3}(z(\text{error}) + z(\text{cost overrun}) + z(\text{reversal})), where z denotes standardization across decisions. All components are oriented prior to standardization so that higher values of Y indicate better outcomes (error, cost overrun, and reversal enter with a negative sign), consistent with interpreting Y as quality and with the key prediction \beta_4 > 0. We pre‑register equal weighting to avoid ex‑post researcher degrees of freedom; results are expected to be robust to alternative weighting schemes (Appendix). The composite reduces measurement noise while preserving economic meaning.

· \text{EDS}_{ij} = indicator for assignment to the minimal enforceable decision structure.

· \text{Debias}_{ij} = indicator for assignment to individual debiasing training.

· \text{Complexity}_{ijt} = indicator for high‑complexity decision environment, measured by pre‑specified binary thresholds (see Section 4.2) and manipulated exogenously in a subset (Section 4.2.2). All complexity measures are defined using pre‑treatment observables and are orthogonal to treatment assignment by construction. We predict that results will be consistent across (i) binary thresholds, (ii) continuous index, and (iii) exogenous complexity shock.

· \mu_j = unit fixed effects.

· Standard errors will be clustered at the unit level. Inference will rely primarily on randomization inference, which remains valid in finite samples with small numbers of clusters. While the number of clusters is moderate, our design prioritizes internal validity and identification of a theoretically decisive interaction.
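Under one consistent sign convention (the three "bads" negated so that higher Y means better quality, matching the predicted \beta_4 > 0), the composite construction can be sketched as follows; function and variable names are ours, not from the pre‑analysis plan:

```python
import numpy as np

def zscore(x):
    """Standardize across decisions (mean 0, SD 1)."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

def composite_quality(error, cost_overrun, reversal):
    """Equal-weighted standardized composite Y.
    error, cost_overrun, and reversal are 'bads', so they enter with a
    negative sign: higher Y = better decision outcome quality."""
    return -(zscore(error) + zscore(cost_overrun) + zscore(reversal)) / 3.0

# toy data for five decisions (hypothetical numbers)
Y = composite_quality(error=[0.1, 0.4, 0.2, 0.8, 0.3],
                      cost_overrun=[0.05, 0.30, 0.10, 0.50, 0.20],
                      reversal=[0, 1, 0, 1, 0])
```

By construction the composite has mean zero across decisions, so arm comparisons are in standard-deviation units.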


Key prediction: \beta_4 > 0. That is, the marginal benefit of EDS over debiasing will be increasing in complexity. Under high complexity (\text{Complexity}=1), the total effect of EDS will be \beta_1 + \beta_4, and we predict \beta_1 + \beta_4 > \beta_2.


Falsification condition: If \beta_4 \leq 0 or \beta_1 + \beta_4 \leq \beta_2 in a well‑powered RCT, or if the estimated effect size is economically negligible (≤5% reduction in error rate), our core claim will be rejected. All thresholds are specified ex ante in the pre‑analysis plan.
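The estimating equation and its key prediction can be illustrated on synthetic data. This is a minimal sketch: all data‑generating parameters are invented, and for simplicity we omit the unit fixed effects (which, with unit‑level assignment, would absorb the treatment main effects) and rely on unit‑clustered standard errors:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for j in range(40):                      # 40 organizational units
    arm = j % 3                          # toy assignment: 0=EDS, 1=Debias, 2=Control
    mu = rng.normal(0, 0.3)              # unobserved unit heterogeneity
    for i in range(10):                  # 10 decisions per unit
        cx = int(rng.integers(0, 2))     # high-complexity indicator
        eds, deb = int(arm == 0), int(arm == 1)
        # DGP with a positive EDS x Complexity interaction (true beta_4 = 0.5)
        y = 0.1*eds + 0.1*deb - 0.2*cx + 0.5*eds*cx + mu + rng.normal(0, 0.5)
        rows.append(dict(unit=j, Y=y, EDS=eds, Debias=deb, Complexity=cx))
df = pd.DataFrame(rows)

# OLS with cluster-robust inference at the unit level
fit = smf.ols("Y ~ EDS + Debias + Complexity + EDS:Complexity",
              data=df).fit(cov_type="cluster", cov_kwds={"groups": df["unit"]})
b1, b2, b4 = fit.params["EDS"], fit.params["Debias"], fit.params["EDS:Complexity"]
```

On such data the fitted interaction recovers \beta_4 > 0 and \beta_1 + \beta_4 > \beta_2, the pattern the design is powered to detect.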


---


4. Experimental Design: The Crucial Test


4.1 Units and Randomization


· Units: 40 organizational units (teams, departments) from diverse sectors (public, private, non‑profit), stratified by size and baseline error rate. To minimize spillover, units are drawn from geographically separated locations or from organizations that do not regularly interact; we will also implement an exposure‑mapping framework (Aronow & Samii, 2017) with network‑based robustness tests (Section 4.9).

· Randomization: Units will be randomly assigned to one of three arms:

  1. EDS arm: Units receive training and implement a simple pre‑decision protocol: before each strategic decision, they must (i) explicitly state the decision frame, (ii) list 3–5 key assumptions, (iii) generate at least two alternative framings, and (iv) document any dissenting views. Time cost: ≤ 15 minutes per decision. No explicit debiasing language is used.

  2. Debiasing arm: Units receive training on common cognitive biases (overconfidence, confirmation bias, anchoring) and are taught debiasing techniques (consider opposite, pre‑mortem, etc.). This arm does not receive the structural protocol. The training does not alter meeting structures.

  3. Control arm: No intervention.


4.2 Complexity Measurement (Pre‑Registered and Exogenous)


4.2.1 Primary: Binary Classification and Continuous Index


We pre‑register three binary criteria:


Criterion, Threshold, and Source:

  • Number of interdependent variables; threshold: ≥5; source: pre‑pilot survey of decision characteristics
  • Interdependence score; threshold: ≥3 on a 1–5 rubric (IRR > 0.8); source: independent coders using a pre‑registered rubric
  • Uncertainty; threshold: historical variance > 0.3, or expert judgment; source: historical data or Delphi panel


A decision is classified as high complexity if it meets at least two of the three criteria. All thresholds are pre‑registered and chosen based on pilot distributional properties, not tuned to maximize treatment effects. We will also compute a continuous complexity index using principal component analysis (PCA) of the three standardized components, used in robustness checks and external validation. All results are expected to be robust to alternative definitions of complexity, including each individual component and leave‑one‑out indices (Appendix).
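The two‑of‑three classification rule and the PCA‑based continuous index can be sketched as follows (function names are ours; a minimal numpy implementation, not the registered analysis code):

```python
import numpy as np

def is_high_complexity(n_vars, interdep_score, hist_variance):
    """High complexity if at least two of the three pre-registered
    criteria are met (thresholds from Section 4.2.1)."""
    criteria = [n_vars >= 5, interdep_score >= 3, hist_variance > 0.3]
    return sum(criteria) >= 2

def complexity_index(X):
    """Continuous index: first principal component of the three
    standardized criteria. X is an (n_decisions, 3) array."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    return Z @ eigvecs[:, -1]            # component with the largest eigenvalue
```

The sign of a principal component is arbitrary, so the continuous index is interpreted up to orientation against the raw criteria.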


We distinguish complexity from difficulty: complexity reflects combinatorial expansion of interdependent factors, not merely variance or noise. We hypothesize that, under high complexity, the marginal addition of one variable increases forecast instability non‑linearly.


4.2.2 Exogenous Complexity Shock (Manipulation)


To move beyond measurement and establish causality, we will introduce an exogenous complexity shock in a randomly selected subset of high‑complexity decisions. For half of the decisions classified as high complexity, we will exogenously inject additional interdependent variables (e.g., three new factors that interact with existing ones) that the decision‑making team must consider. The injected variables are drawn from historically realized factors that were ex‑ante plausible but omitted in comparable past decisions. These injected variables do not alter the payoff structure beyond realistic uncertainty expansion, preserving ecological validity. The source of additional variables will not be disclosed to participants, and they will be embedded within standard decision briefs to ensure they are treated as endogenous elements of the decision environment. This injection will be done by the research team and is orthogonal to unit characteristics. This allows us to test whether the EDS effect is amplified when complexity is not just observed but experimentally increased. Our identification does not rely on the experimental augmentation; all main results will be replicated on naturally occurring high‑complexity decisions (pre‑registered primary robustness).


4.2.3 External Validation of Complexity Construct


To ensure our measure reflects genuine difficulty, we will test its correlation with:


· Decision time (higher complexity → longer deliberation)

· Disagreement level (higher complexity → more divergent views)

· Forecast variance (higher complexity → wider prediction intervals)


These correlations are pre‑registered and, if confirmed, will validate that our measure captures real complexity rather than an artifact of construction.


4.3 Outcomes: Decomposing Decision Quality with Mechanically Anchored Measures


Process quality (mechanism):


· Number of assumptions explicitly identified

· Number of alternative options generated

· Presence of documented dissent (binary)

· Cognitive load – measured via the NASA‑TLX instrument, decision time variance, and entropy of discussion (text analysis). To test cognitive load as a causal mechanism, we will estimate:


\text{Load}_{ijt} = \theta_0 + \theta_1 \text{EDS}_{ij} + \theta_2 \text{Debias}_{ij} + \theta_3 \text{Complexity}_{ijt} + \theta_4 (\text{EDS} \times \text{Complexity})_{ijt} + \mu_j + \epsilon_{ijt}


with the prediction \theta_4 < 0 (EDS reduces cognitive load under high complexity). We will then test whether reductions in cognitive load mediate the effect using sequential g‑estimation (Appendix). We distinguish between gross cognitive effort and effective cognitive load: while EDS may increase procedural effort, it reduces unstructured cognitive burden.


To rule out effort‑based explanations, we will include a placebo structure arm that equalizes time and procedural effort without altering framing (debiasing plus a forced checklist without reframing). This arm is reported in the Appendix and will test whether the framing component, rather than mere effort, drives the effect.


Ex‑ante decision quality (mechanically anchored):

We will use three complementary objective measures:


1. Forecast accuracy (Brier score): Teams will make probabilistic predictions (e.g., probability of success, cost estimate) before the decision. We will compute the Brier score or log score against realized outcomes.

2. Frame completeness index – Three‑Layer Measurement:

      We measure frame completeness as coverage of an ex‑ante validated relevance set plus predictive sufficiency. It is not intended to approximate a true underlying representation; it serves as an instrument validated by its incremental predictive contribution relative to (i) baseline heuristics and (ii) unstructured elicitation, under cross‑validation and temporal holdout. We do not interpret its coefficients causally; they serve as discriminating evidence across mechanism classes, and mechanism identification relies on experimental variation in protocol components (Appendix C) rather than observational mediation. Because any relevance set may reflect expert bias, we will report results across multiple independently elicited panels, show robustness to leave‑one‑panel‑out constructions, and benchmark against a baseline heuristic model.

   · Layer 1 (ex‑ante fixed relevance set): A Delphi panel of domain experts, blind to treatment, identifies the key factors that should be considered for each decision type. This list is fixed before outcomes are known.

   \text{FrameCompleteness}_{exante} = \frac{\text{\# of ex‑ante fixed relevant factors identified}}{\text{\# of ex‑ante fixed relevant factors}}

   · Layer 2 (ex‑post revealed relevance – robustness): Using regression analysis after outcomes are known, we will identify factors that significantly influenced the outcome. This index will be reported only as a robustness check.

   · Layer 3 (predictive sufficiency): We will test whether the set of factors identified by the team can predict the outcome (using a model trained on ex‑ante factors). To avoid conflating predictive adequacy with causal validity, we will complement predictive sufficiency with stability tests across subsamples and exclude post‑treatment variables from the feature set.

3. Ex‑post outcome quality (primary):

   · Standardized composite outcome (see Section 3) as primary.

   · As secondary robustness, we will also report the binary error rate (cost overrun >20% or reversal within 6 months), cost overrun magnitude, and reversal rate individually.
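Two of the mechanically anchored measures above, the Brier score and the Layer 1 frame completeness index, can be sketched as follows (names ours):

```python
import numpy as np

def brier_score(forecast_probs, realized):
    """Mean squared distance between probabilistic forecasts and
    realized binary outcomes; lower is better."""
    p = np.asarray(forecast_probs, dtype=float)
    y = np.asarray(realized, dtype=float)
    return float(np.mean((p - y) ** 2))

def frame_completeness_exante(identified_factors, relevance_set):
    """Layer 1: share of the ex-ante fixed relevance set that the
    team's decision documentation identified."""
    return len(set(identified_factors) & set(relevance_set)) / len(relevance_set)
```

For example, a team that identifies two of four ex‑ante fixed relevant factors scores 0.5 on the Layer 1 index.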


Coordination vs. framing test: We will compute the variance of beliefs across team members (pre‑decision forecast dispersion). A pure coordination mechanism predicts reduced belief dispersion without a systematic increase in frame dimensionality; we will test this prediction and expect to reject it. Results will be reported in the Appendix.


Horse race mechanism test for HC1* (secondary evidence): We will estimate:


Y_{ijt} = \gamma_0 + \gamma_1 \text{FrameCompleteness}_{ijt} + \gamma_2 \text{Alternatives}_{ijt} + \gamma_3 \text{Assumptions}_{ijt} + \text{controls} + \epsilon_{ijt}


If HC1* holds, \gamma_1 should be significantly larger than \gamma_2 and \gamma_3 under high complexity. This will provide evidence consistent with the hypothesized mechanism, but we do not claim definitive identification of the primitive causal channel; rather, the test contributes discriminating evidence against the effort, coordination, and attention alternatives.


All outcomes will be coded by researchers blind to treatment assignment. The primary outcome is the standardized composite; all other outcomes are pre‑registered as secondary or mechanism outcomes.


4.4 Power Calculation and Defensive Robustness


Based on assumed parameters derived from the theoretical model and from pilot studies in related literatures (e.g., Gawande, 2009; Arriaga et al., 2013), we are powered at 0.80 to detect an interaction effect (\beta_4) of 0.25 standard deviations, assuming an intra‑cluster correlation (ICC) of 0.15 and 40 clusters with 10 decisions per cluster. Power calculations are pre‑registered and based on these assumed parameters. No actual pilot data have been collected; the parameters are hypothetical and used for planning purposes.


To address small‑cluster inference, we will supplement with:


· Simulation‑based power (Monte Carlo) using the assumed parameters.

· Cluster‑robust inference using wild cluster bootstrap (Rademacher weights) with 10,000 replications.

· Randomization inference as an alternative, non‑parametric method, reporting exact p‑values from 10,000 random permutations.
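Randomization inference at the unit level can be sketched as follows. This is a simplified two‑arm comparison of unit means rather than the full interaction statistic, and the function name is ours:

```python
import numpy as np

def randomization_p_value(unit_means, unit_treat, n_perm=10_000, seed=0):
    """Permutation p-value for a unit-level mean difference.
    unit_means: outcome aggregated to one value per unit;
    unit_treat: 1 for EDS units, 0 for comparison units.
    Re-randomizes labels at the unit level, matching the design's
    unit-level assignment (valid with few clusters)."""
    rng = np.random.default_rng(seed)
    y = np.asarray(unit_means, dtype=float)
    t = np.asarray(unit_treat)
    obs = y[t == 1].mean() - y[t == 0].mean()
    hits = 0
    for _ in range(n_perm):
        tp = rng.permutation(t)
        stat = y[tp == 1].mean() - y[tp == 0].mean()
        if abs(stat) >= abs(obs):
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction for exactness
```

In the actual analysis the permuted statistic would be the clustered interaction estimate rather than a raw mean difference.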


4.5 Mutually Exclusive Predictions


Condition, Standard Behavioral View (Modern), and Our Prediction:

  • Low complexity (standard view: debiasing works): same; both interventions reduce error vs. control
  • High complexity (standard view: debiasing attenuates): EDS will dominate in reducing decision error


4.6 Information Overload Trap Test (HC3*) – Secondary Supporting Test (Appendix B)


To test HC3* as a secondary supporting hypothesis, we will add a within‑treatment manipulation in a subset of the EDS and control arms, distinguishing relevant from irrelevant overload. The results are predicted to be consistent with the hypothesis that EDS mitigates the overload effect (see Appendix B).


4.7 Manipulation Checks (Treatment Fidelity)


To ensure that interventions are implemented as intended, we will collect manipulation checks:


Treatment & Manipulation Check:

  • EDS : Increase in assumption count, alternative generation, dissent presence (measured from documents).
  • Debiasing : Improvement on standard bias tasks (e.g., overconfidence calibration, confirmation bias tests) administered to participants.


If these checks fail, we will report ITT effects but interpret them with caution.


4.8 Additional Robustness: Compliance‑Adjusted Estimates and Heterogeneity


We will estimate treatment‑on‑the‑treated (TOT) effects using assignment as an instrument for actual protocol compliance. We will also examine heterogeneity by baseline decision quality and include a secondary robustness arm Debiasing + Structure‑Lite (Appendix C).


4.9 Spillover Control and Exposure Mapping


We will implement an exposure‑mapping framework (Aronow & Samii, 2017). Exposure is defined as the proportion of neighboring units within a given network radius assigned to EDS. We will test whether the main treatment effect is robust to allowing spillover effects.
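Exposure under this mapping can be computed as follows (a minimal sketch; the adjacency matrix and radius definition are placeholders for the registered network measure):

```python
import numpy as np

def eds_exposure(adjacency, eds_assignment):
    """Share of each unit's network neighbors (within the chosen radius)
    assigned to EDS; isolated units get exposure 0."""
    A = np.asarray(adjacency, dtype=float)
    z = np.asarray(eds_assignment, dtype=float)
    deg = A.sum(axis=1)                              # number of neighbors
    safe_deg = np.where(deg > 0, deg, 1.0)           # avoid division by zero
    return np.where(deg > 0, (A @ z) / safe_deg, 0.0)
```

For instance, in a fully connected triad with one EDS unit, the two untreated units each have exposure 0.5 and the treated unit has exposure 0.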


4.10 Multiple Hypothesis Testing


We pre‑register a single primary outcome: the standardized composite outcome. All other outcomes are pre‑registered as secondary. We will adjust for multiple testing using the Holm‑Bonferroni method for secondary outcomes.


4.11 Pre‑Registration and Analysis Plan


The study will be pre-registered in the AEA RCT Registry (ID: [TBD]) and the Open Science Framework (OSF). A detailed pre-analysis plan (Appendix A) will be submitted before data collection.


4.12 Time Horizon Justification


Based on the theoretical assumption that for the types of decisions studied (strategic, with moderate lead times), 6 months is sufficient for most outcomes to materialize, we will measure outcomes 6 months after the decision. As a robustness check, we will also report outcomes at 3 months (early) and 12 months (extended) to ensure that the effect is not an artifact of a specific horizon. No actual pilot data have been used to justify this window; it is a design choice informed by the literature on strategic decision lead times.


---


5. Mapping to Existing Frameworks: A Testable Condition


Let Epistemic Gap = actual epistemic risk (frequency of frame mis‑specification) minus the epistemic risk already internalized by the organization’s current governance system (e.g., ISO 31000, COSO). We operationalize the epistemic gap as the unpriced risk in the organization’s decision architecture: the residual variance in decision outcomes unexplained by existing governance controls, proxied by baseline error rates conditional on observed risk controls. We hypothesize:


\Delta Q = 0 \quad \text{if Epistemic Gap} \leq \tau


\Delta Q > 0 \quad \text{if Epistemic Gap} > \tau


where \tau is a threshold to be estimated from the planned pilot data. This is a sharp, testable condition: EDS only improves outcomes when the existing system fails to address epistemic risk.
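A minimal sketch of how this condition could be operationalized, using the residual-variance proxy for the epistemic gap defined above. The OLS proxy and threshold check are illustrative, not the registered estimator:

```python
import numpy as np

def epistemic_gap(baseline_errors, risk_controls):
    """Proxy the epistemic gap as the residual variance in baseline
    decision error left unexplained by observed risk controls (OLS)."""
    y = np.asarray(baseline_errors, dtype=float)
    X = np.column_stack([np.ones(len(y)), np.asarray(risk_controls, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return residuals.var()

def delta_q_positive_predicted(gap, tau):
    """Sharp condition from Section 5: EDS improves outcomes (dQ > 0)
    only if the epistemic gap exceeds the threshold tau."""
    return gap > tau
```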


---


6. Counterfactual Learning: A Unique Identification Strategy


To learn from “prevented failures,” we propose a randomized timing of structured challenge design (see Appendix D).


---


7. Short‑Cycle Falsification Rule


We adopt a power‑based rule with economic significance:


Our theory is considered falsified if, after three independent RCTs (each with power ≥ 0.80), the pooled estimate of \beta_4 is not significantly positive (p > 0.05), if the Bayesian posterior probability falls below 0.10, or if the estimated effect size is economically negligible (≤5% reduction in error rate).


This multi‑study falsification rule aligns the theory with cumulative science norms.
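The frequentist arm of this falsification rule can be sketched as inverse-variance pooling of the three RCT estimates. The one-sided z-test and the inputs are hypothetical simplifications of the full pooled analysis:

```python
import math

def falsification_check(estimates, std_errors, min_effect=0.05):
    """Pool beta_4 estimates across RCTs with inverse-variance weights,
    then apply the rule: the theory survives only if the pooled effect
    is significantly positive AND economically non-negligible."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    z = pooled / pooled_se
    significantly_positive = z > 1.645  # one-sided 5% test (simplification)
    economically_meaningful = pooled >= min_effect
    return pooled, significantly_positive and economically_meaningful
```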


---


8. Scope Conditions with Measurable Thresholds


Condition & Operationalization: 

  • High epistemic load : Number of interdependent variables > 5, or uncertainty (variance of outcomes) > 0.3, or number of decision‑makers > 3.
  • Multi‑actor process : At least two individuals with distinct roles and preferences.
  • High complexity : Decision meets at least two of the binary thresholds in Section 4.2.


The scope condition implies a broader class of environments beyond our sample: any setting with combinatorial expansion of plausible frames relative to processing capacity. External validity follows from a structural invariance condition: whenever the growth rate of the representation space exceeds the evaluation capacity of the decision system, representation-constraining interventions will dominate belief-improving interventions. Practitioners can diagnose applicability by testing whether adding one additional factor increases the dimensionality of the decision space non-linearly (e.g., via interaction growth or forecast instability).
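One way to sketch the practitioner diagnostic is to count how the space of candidate model terms grows when a factor is added; with pairwise interactions, the marginal growth is itself increasing, i.e., non-linear. The helper below is a hypothetical illustration, not a registered measure:

```python
from math import comb

def model_space_size(n_factors, max_order=2):
    """Number of main effects plus interaction terms up to max_order."""
    return sum(comb(n_factors, k) for k in range(1, max_order + 1))

def added_dimensionality(n_factors, max_order=2):
    """Marginal growth of the decision space from one extra factor."""
    return (model_space_size(n_factors + 1, max_order)
            - model_space_size(n_factors, max_order))
```

For example, with pairwise interactions the sixth factor adds more terms than the fifth did, which is the combinatorial (rather than additive) growth the scope condition targets.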


Explicit failure cases: EDS is expected not to work when complexity is low, when a single decision-maker acts alone, when uncertainty is minimal, or when the organization already internalizes framing risks (Epistemic Gap = 0).


---


9. Mechanism Test: Causal Pathway with Exogenous Variation


We will introduce exogenous variation in process components (partial protocol randomization and encouragement design) to causally isolate which elements of EDS drive the effect. This analysis is reported in full in Appendix C; the key result is predicted to be that the framing component alone captures most of the effect, consistent with HC1*.


---


10. Planned Pilot Feasibility: Illustrative Hypothetical Calculation


This section presents a planned pilot design and a hypothetical illustrative calculation to demonstrate how the model would be calibrated and what magnitude of effect the theory predicts. No actual pilot data have been collected as of this writing; the numbers below are purely illustrative and based on theoretical assumptions. They should not be interpreted as empirical results.


To inform power calculations and demonstrate the feasibility of the proposed experimental design, we provide a hypothetical scenario based on the theoretical predictions of the model. This illustration is not an empirical finding but a reconstructive example of how effect sizes might be estimated and used for sample size determination.


Illustrative Assumptions:


Suppose a future pilot were to be conducted with two organizational units (one public, one non‑profit) implementing the EDS protocol for a set of high‑complexity decisions. Based on the theoretical framework and prior literature on structured decision aids (e.g., checklists in medical and aviation settings), we hypothesize that the EDS intervention could reduce ex‑post error rates by approximately 28% relative to baseline, while a debiasing‑only intervention might yield a reduction of about 9%. These numbers are illustrative engineering estimates derived from extrapolating the effect sizes observed in related literatures (e.g., Gawande, 2009; Arriaga et al., 2013) and from the functional form assumed in our model.


Illustrative Calculation:


· Baseline error rate (control, hypothetical): 0.32

· Hypothetical EDS error rate: 0.23 → absolute reduction = 0.09 → relative reduction = 28%

· Hypothetical debiasing‑only error rate: 0.29 → absolute reduction = 0.03 → relative reduction = 9%


Similarly, we illustrate improvements in secondary outcomes:


· Brier score improvement: 0.12 in EDS vs. 0.03 in debiasing (hypothetical)

· Frame completeness increase: 35% in EDS vs. 8% in debiasing (hypothetical)

· Cognitive load (NASA‑TLX) decrease: 0.8 points in EDS under high complexity (hypothetical)


Purpose of This Illustration:


· To demonstrate that the proposed effect sizes (e.g., 28% reduction) are within a plausible range and could be detected with the planned sample size (40 clusters).

· To provide a concrete anchor for power calculations and for the pre‑analysis plan (Appendix A).

· To illustrate how the model’s parameters would be estimated if the pilot were conducted.
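To illustrate how these hypothetical magnitudes would feed a power calculation, the following minimal sketch uses a normal approximation for two proportions with a cluster design effect. The cluster size of 10 and ICC of 0.05 in the usage comment are assumptions for illustration only, not design commitments:

```python
from math import sqrt
from statistics import NormalDist

def power_two_proportions(p_control, p_treat, n_clusters_per_arm,
                          cluster_size, icc, alpha=0.05):
    """Approximate power for a cluster RCT comparing two proportions,
    inflating the variance by the design effect 1 + (m - 1) * ICC."""
    z = NormalDist()
    deff = 1 + (cluster_size - 1) * icc
    n_eff = n_clusters_per_arm * cluster_size / deff  # effective n per arm
    se = sqrt(p_control * (1 - p_control) / n_eff
              + p_treat * (1 - p_treat) / n_eff)
    z_alpha = z.inv_cdf(1 - alpha / 2)
    effect = abs(p_control - p_treat)
    return z.cdf(effect / se - z_alpha)

# Illustrative EDS scenario: error rate 0.32 -> 0.23, 20 clusters per arm
# (hypothetical cluster size 10, hypothetical ICC 0.05)
```

The standardized composite primary outcome would require its own calculation; this sketch only shows how the illustrative error rates translate into approximate power under assumed clustering.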


Important Disclaimer: These numbers are purely illustrative and hypothetical. No actual pilot has been conducted. The purpose of this section is to show that the theory generates testable quantitative predictions and to aid in experimental planning. Any future pilot results, whether confirming or disconfirming these illustrative magnitudes, will be reported transparently.


---


11. Conclusion


We have stripped the theory to its core: in high-complexity environments, a minimal enforceable decision structure will dominate individual debiasing in reducing economically meaningful decision error. This claim is a sharp, testable boundary condition, embedded in a pre-registered experimental design with power-based falsification rules, a mechanism test with exogenous variation, and a secondary supporting test (HC3*). The comparison is between realistically implementable interventions: enforceable process constraints vs. non-enforceable cognitive corrections. Our estimand is not a primitive causal parameter but a policy-relevant comparison between enforceable and non-enforceable interventions, i.e., the margin organizations actually face; the contribution is to identify dominance in implementable intervention space, not to decompose cognition and structure, and the relevant counterfactual is not ideal debiasing but debiasing as it is realistically deployed. We have introduced mechanically anchored ex-ante measures, treating frame completeness not as an approximation of a true underlying representation but as an instrument validated by its incremental predictive contribution relative to (i) baseline heuristics and (ii) unstructured elicitation, under cross-validation and temporal holdout. We do not interpret coefficients on frame completeness causally; they serve as discriminating evidence across mechanism classes.
Mechanism identification relies on experimental variation in protocol components (Appendix C), not on observational mediation. We predict consistent results across the binary thresholds, the continuous index, and the exogenous complexity shock, and we expect the evidence to rule out effort-based, pure-coordination, and pure-attention accounts. We will not identify the primitive causal channel; rather, we will provide discriminating evidence that eliminates this broad class of alternative mechanisms. We define a minimal enforceable decision structure as the lowest-cost protocol that (i) expands the representation space and (ii) is verifiably enforceable at the decision-instance level; we claim sufficiency for dominance under high epistemic load, not optimality. EDS expands the set of considered representations while simultaneously constraining it to a structured, tractable subset: a constrained expansion of the representation space under bounded evaluation capacity. When the space of plausible problem representations grows faster than the capacity to evaluate them, constraining the representation space dominates improving belief accuracy within any given representation. Our operationalization of complexity captures combinatorial growth in the representation space, not merely stochastic uncertainty: the defining feature of high complexity in our framework is that marginal variables increase interaction dimensionality rather than additive variance.
External validity follows from a structural invariance condition: whenever the growth rate of the representation space exceeds the evaluation capacity of the decision system, representation-constraining interventions will dominate belief-improving interventions. Our predictions imply that a large class of behavioral interventions may be targeting a non-binding constraint in complex organizational environments. This does not mean cognition is unimportant; rather, under high epistemic load, improving cognition within a mis-specified representation yields lower marginal returns than restructuring the representation itself. Our contribution is to identify the regime in which the primary bottleneck shifts from belief accuracy to problem representation. To our knowledge, this is among the first pre-registered randomized tests designed to compare structured protocols and debiasing across experimentally varied complexity regimes.


We do not claim that EDS universally dominates debiasing; rather, we identify a regime—characterized by high epistemic load and multi‑actor complexity—in which structural interventions become the binding constraint. If our experiment finds that EDS does not outperform debiasing under high complexity, the theory will be falsified. If it does, we will provide direct causal evidence that under high epistemic load, decision structure—not individual cognition—is the binding constraint on decision quality.


---


Accountability‑Based Universal Wisdom and Trust

Cross‑Sector Pre‑Decision Governance Translator


March 2026


📧 Contact: tpapgtk@gmail.com

📄 License: CC BY‑NC‑SA 4.0

Pre‑Analysis Plan: Appendix A (available upon request)

Supporting Appendices: B (Information Overload Test), C (Mechanism & Structure‑Lite), D (Counterfactual Learning Design)

Saturday, 28 March 2026


Epistemic Governance under High‑Constraint Environments (EG‑HCE)

An Analytical Framework for Anticipating Risk in Decision Architectures

Adapting Pre‑Decision Governance for Environments with Concentrated Authority, Legal Uncertainty, and High Power Distance

Accountability‑Based Universal Wisdom and Trust · Cross-Sector Pre-Decision Governance Translator

Version 2.1 – Final for Publication

March 2026

License: CC BY-NC-SA 4.0

Contact: tpapgtk@gmail.com

Archive: https://abuwt.blogspot.com


---


Forward‑Looking Disclaimer

This framework is designed as a prospective analytical tool to anticipate how epistemic governance mechanisms may perform under varying hypothetical governance conditions, including but not limited to high‑concentration decision systems, volatile legal enforcement environments, and high power‑distance organizational cultures. It does not describe any specific country, regime, or ongoing political situation. All scenarios discussed are purely illustrative and derived from theoretical extrapolation.


Non‑Operationalization Clause:

This framework does not provide operational, tactical, or procedural guidance for evading legal oversight, institutional controls, or enforcement systems. Any behavioral patterns described are analytical abstractions and should not be interpreted as recommended actions in real‑world contexts.


Ethical Neutrality Clause:

This framework does not evaluate the ethical desirability of any observed adaptation patterns. Its sole purpose is to model potential trade‑offs between epistemic quality and constraint exposure.


Interpretive Limitation Clause:

The patterns described in this framework are not intended to be transferable across contexts without substantial modification. Their appearance in empirical settings does not imply replicability, effectiveness, or safety. The framework does not assess feasibility or endorse the adoption of any such patterns, and explicitly cautions against direct application without context‑specific evaluation.


---


Abstract


The ABUWT architecture, especially Pre-Decision Governance (PDG), was developed under the assumption of a relatively open public sphere, predictable law enforcement, and minimal protection for dissent. However, under various possible future governance configurations, including conditions of high power concentration, legal uncertainty, and hierarchical pressure, the effectiveness of PDG needs to be re-examined.


This paper develops Epistemic Governance under High-Constraint Environments (EG-HCE) as a conceptual framework that complements ABUWT by addressing the question: Under what conditions does accountability of reasoning become systematically self-undermining? This paper shifts the locus of risk from decision outcomes to the epistemic process itself.


EG-HCE does not claim that PDG is useless under such conditions. Instead, it distinguishes three layers of response:


1. Risk-Sensitive Conditions – PDG practices that are classified as high-risk under certain conditions and must be avoided entirely.

2. Low-Observability Epistemic Adjustments – ways of using PDG ideas with a low risk profile, without creating traces that can be exploited by actors holding high authority.

3. Disengagement Thresholds – conditions under which organizations or individuals must stop using PDG because the risks have exceeded the potential benefits.


EG-HCE provides a minimal formal model of the trade-off between epistemic value and enforcement risk, together with falsifiable predictions that can be tested in case studies across governance configurations. ABUWT thus no longer claims naive universality, but offers a context-aware framework prepared for institutional realities that are not always friendly.


Position Note: EG-HCE is not a prescriptive guide for action, but an analytical framework for understanding constraints on epistemic governance under varying levels of institutional risk. The classifications presented here describe observed patterns and theoretical adaptations; they constitute neither normative endorsement nor recommendations for action. EG-HCE is positioned formally within the theory of decision-making under uncertainty and pressure, not within frameworks of political strategy, governance reform, or institutional resistance. The patterns described derive from theoretical modeling, secondary literature, and general observation of organizational behavior under pressure; they do not imply deliberate design by actors, but may emerge as adaptive responses to perceived risk environments.


Keywords: epistemic governance, high-constraint environments, pre-decision governance, dissent, risk management, low-observability adaptation, enforcement risk, organizational decision theory, non-invertibility, risk-adjusted epistemic governance, rational disengagement regime, epistemic risk dominance region, process-generated exposure


---


1. Introduction: Why PDG's Universality Needs Re-examination


Pre-Decision Governance (PDG) was designed as a universal framework: its four pillars (Framing, Option Architecture, Information Filtering, Deliberative Structure) are claimed to apply at all levels (micro, meso, macro, global) and in all sectors. Universality, however, is a very heavy claim to defend, especially when confronting institutional configurations that are structurally hostile to its underlying principles.


Previous ABUWT documents have acknowledged boundary conditions:


· PDG may be less relevant in environments with extreme hierarchy.

· Power asymmetry and epistemic safety are acknowledged as dimensions of fragility.

· Political regime openness is named as a moderating variable.


However, these acknowledgments remain ad hoc; they have not been integrated into a systematic theory. As a result, ABUWT remains vulnerable to the critique: "This is a product relevant only under ideal conditions."


Epistemic Governance under High-Constraint Environments (EG-HCE) exists to fill this gap. It does not discard PDG, but places it within a realistic risk map. EG-HCE addresses critical questions:


· Can PDG serve as a tool for non-governmental organizations to demand accountability in environments with highly centralized authority?

· Or can it instead be exploited by authority holders to create an illusion of transparency?

· How can dissent be protected in an organizational culture that punishes disagreement?

· What is the "minimum" version of PDG that is safe for such contexts?


Epistemic Position: EG-HCE is an analytical framework, not an activism manual. It classifies risks; it does not command action. The decision to adopt or not adopt a particular strategy remains with local actors who possess deeper contextual knowledge.


Position in the Literature: EG-HCE builds on and extends existing discourse in three major traditions: (1) deliberative governance, by adding a lens of institutional risk that is often neglected in participation discourse; (2) organizational resilience under stress, by showing how entities can maintain epistemic capacity within shrinking space; and (3) institutional decision-making under constraints, by formalizing the trade-off between reasoning quality and sanction risk. By integrating these three traditions into a measurable risk-benefit model, EG-HCE offers a cross-sector contribution that none of these literatures can achieve on its own. This work explicitly builds on three foundational bodies of research: the hidden-profile literature on group deliberation (Stasser & Titus, 1985; Schulz-Hardt et al., 2006) to ground the shape of V(d); the preference falsification literature (Kuran, 1995) to model the possibility of negative epistemic returns (\alpha < 0); and the organizational silence literature (Morrison & Milliken, 2000) to inform the suppression of dissent under power asymmetry.


EG‑HCE is positioned within decision theory under uncertainty and constraint, and not within political strategy, governance reform, or institutional resistance frameworks.


Contributions. This paper contributes three novel elements to the epistemic governance literature, which we label as:


1. Risk‑Adjusted Epistemic Governance. Formalization of epistemic governance under enforcement risk: a minimal decision‑theoretic model that explicitly incorporates detection probability and enforcement cost, enabling systematic analysis of when governance mechanisms themselves become liabilities.

2. Non‑Invertible Adaptation Structures. Integration of decision theory with institutional constraint environments. By modeling utility as  U(d) = V(d) - \theta \cdot p(d) \cdot C , we bridge behavioral decision theory and institutional analysis, allowing direct comparison of governance efficacy across contexts with varying surveillance intensity, legal uncertainty, and power distance. The non‑invertibility principle (Section 4.6) explains why observed adaptation patterns cannot be reliably reverse‑engineered into intentional strategies.

3. Rational Epistemic Disengagement Regime (REDR). Identification of non‑adoption as a behaviorally stable convergence pattern under repeated exposure to enforcement signals, self‑reinforcing via belief updating (not a formally solved game‑theoretic equilibrium). We argue that in high‑constraint environments, not using structured governance mechanisms can be a utility‑maximizing choice, challenging the normative assumption that “more governance is always better.” This outcome should not be interpreted as welfare‑optimal, but as individually rational under constrained risk perception. EG‑HCE demonstrates that epistemic improvement can become systematically harmful when epistemic processes generate endogenous enforcement risk.


At its core, EG‑HCE makes a single claim: under sufficient enforcement risk, epistemic improvement becomes utility‑reducing.
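This core claim can be illustrated numerically with the utility model from Contribution 2, U(d) = V(d) - \theta \cdot p(d) \cdot C. The linear functional forms chosen below for V(d) and p(d) are hypothetical illustrations, not part of the framework:

```python
def utility(d, theta, C, alpha=1.0, gamma=1.0):
    """U(d) = V(d) - theta * p(d) * C with illustrative forms:
    epistemic value V(d) = alpha * d (linear in governance depth d),
    detection probability p(d) = min(1, gamma * d) (rising with depth)."""
    V = alpha * d
    p = min(1.0, gamma * d)
    return V - theta * p * C

def marginal_utility_positive(d, theta, C, alpha=1.0, gamma=1.0, eps=1e-6):
    """Does deepening the epistemic process still pay at depth d?"""
    return utility(d + eps, theta, C, alpha, gamma) > utility(d, theta, C, alpha, gamma)
```

Under these forms, deepening the epistemic process pays only while alpha > theta * gamma * C: when enforcement risk theta or cost C is large enough, each additional unit of governance depth reduces utility, which is the claim's inversion point.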


Crucially, the novelty of EG‑HCE lies not in modeling risk per se, but in conceptualizing process‑generated endogenous risk (PER). This is not risk of action outcomes, but risk generated by the very structure of reasoning itself. Unlike standard decision‑under‑risk frameworks where risk attaches to consequences of choices, here the epistemic process—governance depth, documentation, formal dissent—creates its own exposure. This shift from outcome‑based risk to process‑generated exposure constitutes the core theoretical innovation.


EG‑HCE does not prescribe action, but it identifies when standard governance recommendations may become harmful. By shifting the governance discourse from optimizing process to understanding the conditions under which process inverts its own value, the framework provides a diagnostic tool for anticipating when epistemic improvement ceases to be beneficial.


---


2. Three Dimensions of Fragility in Constrained Environments


To understand how PDG must be modified, we need to identify the structural dimensions that make constrained contexts different from the environments in which PDG was developed.


2.1 Concentrated Authority


Authority is not distributed, and there are no effective checks and balances. Top decision-makers cannot be held substantively accountable.


Consequences for PDG:


· Framing Governance: Only framings aligned with the wishes of the authority holder are permitted.

· Option Architecture: Alternative options perceived as "threatening" cannot be proposed.

· Deliberative Structure: Formal space for dissent does not exist, or exists only as a ritual without influence.


2.2 Legal Uncertainty (Unpredictable Rule Enforcement)


Rules are used as instruments of power, not as protection for citizens.


Consequences for PDG:


· Information Filtering: Information inconvenient to authority holders can be criminalized.

· The documentation of reasoning at the core of PDG can become evidence used to prosecute members of the organization.

· There is no guarantee that whistleblowers or bearers of bad news will be protected.


2.3 High Power-Distance Culture


Power inequality is internalized as a norm; subordinates do not dare to speak up to superiors, let alone criticize them.


Consequences for PDG:


· Institutionalizing dissent is nearly impossible because the social sanctions are severe.

· Individual epistemic agency is suppressed even without a direct legal threat.

· PDG can be carried out as an empty formality (performative transparency) without substantive change.


---


3. Risk-Sensitive Conditions: Classifying High-Risk PDG Practices


Risk classifications are analytical categorizations of observed patterns, not decision guidance.


In constrained environments, several practices recommended by PDG are classified as high-risk under certain conditions. This classification is not a "do not do this" command, but an identification of the conditions in which systematic risk exceeds potential benefit.


3.1 Traceable Written Documentation


PDG emphasizes documenting assumptions, alternatives, and dissent. In environments where rule enforcement is unpredictable, such documents can be seized and used as instruments of entrapment.


Klasifikasi: High‑risk practice when historical patterns show use of documentation for enforcement actions.


Low-risk alternative (documented pattern): Cognitive documentation (mental processes with no physical trace) and collective documentation without individual attribution have been observed in various organizations in constrained environments. All cited cases should be interpreted as synthetic composites derived from multiple empirical literatures, rather than single identifiable studies.


3.2 Formal Dissent with Open Identity


PDG recommends protection for challengers (structured dissent). However, in environments that punish disagreement, exposing a challenger's identity can have fatal consequences.


Klasifikasi: High‑risk practice when no reliable protection for dissenting voices exists.


Low-risk alternative (documented pattern): Dissent by proxy (a collective voice that cannot be traced to any individual) and dissent through undocumented informal channels have been reported in organizational case studies across a range of constrained contexts.


3.3 Full Public Transparency


PDG promotes transparency as part of accountability. However, in environments with a high risk of sanctions, publishing information about assumptions, alternatives, or criticism can backfire.


Klasifikasi: High‑risk practice when publication enables retaliation.


Low-risk alternative (documented pattern): Selective transparency only toward parties with the capacity to protect the information (e.g., international networks), or time-delayed transparency after risks subside, has been used by several organizations in similar situations.


---


4. Low-Observability Epistemic Adjustments: Low-Risk Strategies


Misinterpretation Guard Clause: Any interpretation of Section 4 as a set of actionable strategies is analytically incorrect. The patterns described are equilibrium artifacts under selection, not feasible choice sets: post-hoc observational regularities under extreme selection bias, not ex-ante options available to actors. Any resemblance to actionable strategies is structurally misleading due to selection bias, survivorship bias, and endogenous detection; actors attempting to replicate these patterns should expect higher, not lower, risk exposure. Their empirical status is existential (demonstrating that such adaptations are possible) rather than frequential. Because of non-invertibility and endogenous detection, these patterns cannot be treated as reproducible strategies, and Section 4 should not be read as prescriptive guidance.


Pattern, evidence type, and strength level:

  • Cognitive documentation : case studies (synthetic composites) : Low (exploratory)
  • Indirect signaling : qualitative multi-case (synthetic composites) : Medium (cross-case consistency)
  • Information compartmentalization : cross-national organizational studies (synthetic composites) : Medium (comparative)
  • Distributed responsibility structures : mixed-methods organizational ethnography (synthetic composites) : Medium-Low (context-dependent)
  • Temporal variation : longitudinal case analysis (synthetic composites) : Medium (temporal consistency)


In high-risk contexts, PDG can still be used with modifications classified as low-risk strategies, often referred to as low-observability epistemic adjustments or non-attributable governance patterns. The following strategies have been documented in empirical studies across contexts. These patterns are described descriptively, based on theoretical extrapolation and on organizational behavior observed under pressure, not as prescriptive guidance.


Importantly, the framework does not predict that actors who are aware of these patterns will be able to implement them effectively. The mapping between observed patterns and actor‑level behavior is non‑invertible under high‑constraint uncertainty.


4.1 Cognitive Documentation (No Written Trace)


All PDG steps are carried out mentally, without documentation. Teams are trained to automatically question assumptions, explore alternatives, and simulate dissent in their own heads.


Observed pattern: Cognitive documentation appears in settings where formal recording carries disproportionate risk.


Classified risk: Consistency and collective learning are difficult to ensure; no trace is left that could be used for enforcement actions.


4.2 Indirect Signaling Mechanisms


Dissent is conveyed in ways that cannot be prosecuted: through rhetorical questions, analogies, or stories about "other organizations" that experienced similar failures.


Documented pattern example: In documented cases, participants used hypothetical analogies to raise concerns without direct attribution.


Analytical principle: Indirect signaling allows risk reduction through plausible deniability.


4.3 Information Compartmentalization Patterns


Where documentation is strictly required, it is split into fragments stored in different locations, often encrypted, and placed outside the local jurisdiction.


Observed pattern: Information compartmentalization has been observed in multinational organizations operating across varied legal environments.


Analytical principle: Distributed storage reduces single-point vulnerability.


4.4 Distributed Responsibility Structures


Instead of making a single individual responsible, responsibility is distributed among several actors, with role rotation and collective decision-making.


Observed pattern: Distributed responsibility structures appear in organizations seeking to reduce individual exposure.


Analytical principle: Diffuse responsibility reduces the expected cost of targeting any single actor.


4.5 Temporal Variation in Organizational Behavior


PDG initiatives are undertaken while authority holders are preoccupied with other crises, or when strong international attention is present. This pattern is recorded in studies of windows of opportunity in organizations operating in high-pressure environments.


Observed pattern: Temporal clustering of activities during periods of reduced enforcement attention has been documented in multiple settings.


Analytical principle: Temporal variation exploits fluctuations in enforcement intensity.


4.6 Non‑Invertibility Principle


A central conceptual implication of EG‑HCE is the non‑invertibility of epistemic adaptation patterns: observed low‑observability patterns cannot be reliably reverse‑engineered into intentional strategies. Formally, let an observed pattern O be generated by an unknown mapping O = F(S, \epsilon, H), where S is the latent strategy, \epsilon is stochastic noise, and H is unobserved heterogeneity. Because F is not injective and H is not recoverable from O, the inverse F^{-1}(O) is not well‑defined. This structural non‑invertibility, rooted in endogenous detection dynamics, incomplete information, and path dependency, implies that predictive content is restricted to directional and distributional predictions rather than actor‑level prescriptions.


Non‑invertibility does not eliminate empirical content; it restricts it to population‑level directional and distributional predictions rather than strategy reconstruction or actor‑level prescriptions. This formulation explains why Section 4 constitutes analytical abstraction rather than operational guidance, distinguishing EG‑HCE from literatures that assume observed behaviors can be deliberately replicated. Non‑invertibility here is a structural asymmetry between observation and action under endogenous detection.


---


5. Disengagement Thresholds: Conditions for Ceasing PDG Use


No strategy is entirely safe. EG‑HCE provides a classification of conditions under which an organization or individual should stop using PDG because risk has come to exceed potential benefit. In the model's terminology, these are the conditions under which the corner solution ( d = 0 ) becomes optimal, a state we call the Rational Epistemic Disengagement Regime (REDR). The term "regime" is used here to denote a behaviorally stable convergence pattern under repeated exposure, rather than a formally solved game‑theoretic equilibrium. It denotes a stable outcome where actors' collective adjustments lead to persistent non‑adoption, even when individual actors might theoretically benefit from deeper governance if they could act unilaterally.


Crucially, the Rational Epistemic Disengagement Regime (REDR) must be strictly distinguished from extreme risk aversion. While risk aversion is a parameter of individual utility (\theta), REDR is a systemic phase transition. In REDR, the marginal epistemic return V'(d) becomes endogenously degraded by the environment's hostility, such that even for a risk‑neutral agent (\theta \to 0), the optimal depth remains zero. REDR is a property of the environment, not of the agent. It is not a failure of courage, but a rational recognition of epistemic futility, in which the process of reasoning is so compromised that non‑participation becomes the only remaining logical act.


To illustrate: consider an environment where any documented reasoning automatically triggers investigation, regardless of content. In such a setting, even perfectly rational deliberation yields negative expected value because the act of reasoning itself exposes the organization. Non‑adoption becomes the only truth‑preserving action.


Proposition (REDR ≠ Risk Aversion).

If there exists an environment such that V'(d) < 0 for all d > 0 (Epistemic Collapse), then d^* = 0 for all \theta \ge 0. This demonstrates that non‑adoption can be independent of risk preference. This condition is expected to be rare and context‑specific, but analytically important as a limiting case.


REDR is not defined by absence of governance, but by systematic erosion of marginal epistemic returns observable through declining error‑correction despite increased stakes. This means that in environments where REDR occurs, we should observe that even when the consequences of error grow larger, organizations do not respond by deepening governance; instead, they disengage, because the expected value of deeper reasoning has become negative.


Dimensions contrasting Extreme Risk Aversion (left of each arrow) with the Rational Epistemic Disengagement Regime (REDR, right):

  • Cause: fear of consequences (C) --> degradation of epistemic benefit (V)
  • Nature: psychological / individual --> structural / systemic
  • Reversibility: instant if the threat is removed --> slow; requires trust reconstruction
  • Actor status: avoids in order to survive --> withdraws because reasoning is futile
  • Dependence on \theta: positive (higher \theta → shallower depth) --> can occur even as \theta \to 0


5.1 Danger Signs (High-Risk Classification)


Indicators and risk classification:

Team members begin to be interrogated about the decision-making process: High‑risk signal: attention from enforcement actors detected

Documents are seized or demanded for surrender: High‑risk signal: documentary trail weaponized

Members are dismissed or sanctioned for voicing dissenting views: High‑risk signal: dissent subject to sanction

Communication technology comes under intensive surveillance: High‑risk signal: digital surveillance escalated

External partners begin to withdraw under pressure: High‑risk signal: external protection weakened


5.2 Observed Responses (Documented Patterns)


1. Reduction in observable decision artifacts: Under high pressure, an observable decline in decision documentation occurs in response to rising enforcement signals.

2. Shift to survival mode: Focus shifts from optimizing decision quality to protecting members and sustaining the organization.

3. Role reassignment or geographic redistribution of personnel: Exposed members are moved into unexposed roles or to lower-risk locations.

4. Archival documentation: A single copy of the documentation is stored outside the local jurisdiction with restricted access, preserved for future learning once conditions improve.


---


6. A Formal Model with Estimable Risk Parameters: Epistemic Risk‑Adjusted Governance Model (ERGM)


To illustrate the decision logic of constrained environments, we present a formal model that can be estimated empirically: the Epistemic Risk‑Adjusted Governance Model (ERGM). ERGM abstracts from real political systems and should be interpreted as a generalized utility‑risk framework applicable to any high‑uncertainty governance environment. Unlike standard models where risk is attached to action outcomes, ERGM introduces process‑generated endogenous exposure, making epistemic refinement itself a risk‑generating activity (Process‑Endogenous Risk, PER). This new category of risk is orthogonal to both outcome risk (the probability of wrong decisions) and action risk (the probability of sanctions from actions). Even correct decisions and compliant actions can generate exposure solely through the structure of reasoning. The model is intentionally minimal and not structurally identified; its purpose is directional falsification, not parameter recovery. While full structural identification is infeasible, EG‑HCE yields sign‑restricted testable implications that are robust to functional form assumptions.


The primary unit of analysis is the organization, with decisions aggregated at the organizational level and micro‑foundations at the individual level.


Figure 1: Core ERGM Graph – Utility as a Function of PDG Depth


```
Utility
  ^
  |             ______
  |           _/      \_
  |         _/          \_         V(d): epistemic benefit (inverted-U)
  |       _/              \_
  |     _/                  \_
  |   _/    U(d) = V(d) - θ·p(d)·C peaks earlier, at d*, because the
  |  /      risk-adjusted cost θ·p(d)·C grows with depth
  | /
  |/
  +------------|--------------------------------> PDG depth (d)
  0           d*                             1
```


Utility: U(d) = V(d) - \theta p(d) C. The optimal depth d^* occurs where the gap between epistemic benefit and risk‑adjusted cost is maximized.


6.1 Assumptions and Definitions




·  d  = PDG depth (scaled from 0 to 1).

·  V(d)  = expected epistemic benefit (0–1 scale). We assume that  V(d)  initially increases with  d  due to better error detection, broader exploration of options, and reduced framing bias, consistent with the literature on deliberation quality. At very high depths, however,  V(d)  may show diminishing returns or even decline (inverted‑U) due to over‑deliberation, rising coordination costs, or strategic silence within highly rigid formal structures. This assumption reflects an empirical regularity observed in structured deliberation experiments and hidden‑profile research: group decision accuracy improves with information sharing up to a point, after which coordination costs, communication load, and strategic silence reduce net epistemic gains (see Stasser & Titus, 1985; Schulz‑Hardt et al., 2006; the hidden profile problem literature).


Micro‑foundation of  V(d) . The shape of  V(d)  is grounded in three mechanisms from the hidden‑profile and group decision literature:


· Information sharing: Early increases in d improve the probability that unshared information enters deliberation, reducing hidden profile bias and enhancing decision accuracy. This yields  V'(d) > 0  for low d.

· Coordination costs: As d increases, the overhead of structuring deliberation (e.g., formal procedures, documentation) grows, eventually exceeding marginal informational gains. This produces diminishing returns.

· Strategic silence and preference falsification (Kuran, 1995): In high‑constraint environments, deeper governance may trigger withholding of truthful information, leading to  V'(d) < 0  beyond some threshold—or even from the outset if the environment is toxic (\alpha < 0). This micro‑foundation links  V(d)  directly to observable group processes.


We explicitly reject the naive assumption that  V(d)  is always positive or increasing. In environments characterized by extreme preference falsification (Kuran, 1995) or systematic misinformation, the baseline epistemic value may be zero or even negative (V'(d) < 0 for all d). To capture this, we introduce an epistemic hostility parameter \alpha such that V(d) = \alpha \cdot f(d), where \alpha > 0 indicates supportive environments, \alpha = 0 indicates indifferent environments, and \alpha < 0 indicates toxic environments. The baseline condition used in comparative statics assumes \alpha > 0 and f(d) increasing on [0,\tilde{d}].


To ensure testable content and prevent over‑flexibility, we impose a discipline constraint: for empirical implementation, V(d) is restricted to the class of single‑peaked concave functions with bounded support. A standard example is the quadratic form V(d) = a d - b d^2 with a, b > 0, which yields an inverted‑U shape with a single interior maximum. This retains the possibility of diminishing returns and eventual decline while providing a bounded functional form for estimation. The model does not require V(d) to be globally increasing; it only requires a local region where V'(d) > 0 for identification.


·  p(d)  = probabilitas deteksi oleh otoritas penegak. EG‑HCE core predictions apply specifically to observability‑sensitive regimes where formal records or participation are the primary detection channels. In such regimes,  p(d)  is increasing in d. In legitimacy‑signaling regimes where formalization reduces suspicion,  p(d)  may be non‑monotonic or decreasing—a scenario that generates qualitatively different predictions and is treated as an extension rather than the baseline. The framework therefore distinguishes two worlds:

    World 1 (Enforcement‑sensitive):  p'(d) > 0  — EG‑HCE predictions hold as stated.

    World 2 (Legitimacy‑signaling):  p'(d) \le 0  — the model yields inverted predictions, and deeper governance may be protective.

    Accordingly,  p(d)  should be interpreted as a partial function capturing only the marginal contribution of PDG‑related observability to detection probability.


EG‑HCE is a conditional theory, not a universal one. Its predictions are sign‑conditional on  p'(d)  and the enforcement regime. This conditionality is a strength: it defines clear scope conditions rather than overreaching.


·  C  = consequences if detected. For estimation,  p(d)  is parameterized as the logistic function

  p(d) = \frac{1}{1 + e^{-k(d - d_0)}}

  with  k > 0  (detection sensitivity) and  d_0  (midpoint).

·  \theta  = risk aversion coefficient, interpreted as aggregate organizational risk tolerance.

·  R(d)  = expected enforcement risk =  p(d) \cdot C .


Process‑Endogenous Risk (PER). The incremental detection probability attributable to epistemic structure is defined as:


\text{PER}(d) = p(d) - p_0


where p_0 is baseline detection probability when no governance depth is applied (d=0). PER captures the risk generated by the reasoning process itself, distinct from risks associated with decision outcomes or compliant actions.


Behavioral micro‑foundations: We assume bounded Bayesian updating within a limited social‑network structure. Actors update beliefs based on observed enforcement signals and the outcomes of their closest network peers, subject to cognitive constraints such as recency bias and confirmation bias.


6.2 Utility Function


U(d) = V(d) - \theta \cdot R(d) = V(d) - \theta \cdot p(d) \cdot C
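The utility function and its interior optimum can be sketched numerically. The block below is a minimal illustration, assuming the quadratic V(d) = a d - b d^2 from the discipline constraint and the logistic p(d); all parameter values (a, b, k, d_0, \theta, C) are invented for demonstration and are not estimates.

```python
import math

def V(d, a=1.0, b=0.6):
    """Single-peaked quadratic epistemic benefit V(d) = a*d - b*d^2."""
    return a * d - b * d * d

def p(d, k=8.0, d0=0.5):
    """Logistic detection probability with sensitivity k and midpoint d0."""
    return 1.0 / (1.0 + math.exp(-k * (d - d0)))

def U(d, theta, C):
    """ERGM utility: epistemic benefit minus risk-adjusted enforcement cost."""
    return V(d) - theta * p(d) * C

def optimal_depth(theta, C, n=2001):
    """Grid search for the optimal PDG depth d* on [0, 1]."""
    grid = [i / (n - 1) for i in range(n)]
    return max(grid, key=lambda d: U(d, theta, C))

d_low_risk = optimal_depth(theta=0.2, C=1.0)   # mild enforcement environment
d_high_risk = optimal_depth(theta=2.0, C=1.0)  # severe enforcement environment
```

Under these assumed parameters the optimum compresses as enforcement risk grows, previewing the comparative statics of Section 6.5.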


6.3 Competing Model: Naive Governance Model (NGM)


We contrast EG‑HCE with a naive alternative that ignores enforcement risk:


U_{NGM}(d) = V(d)


Under NGM, optimal depth is the epistemic optimum regardless of enforcement context. A decisive test: exogenous increase in enforcement risk should reduce depth under EG‑HCE but not under NGM.
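The decisive test can be made concrete with a small simulation. This is an illustrative sketch only: V(d) and p(d) take the assumed functional forms of Section 6.1, and the shock is modeled as an exogenous tripling of C.

```python
import math

def V(d, a=1.0, b=0.6):
    """Quadratic epistemic benefit (assumed illustrative form)."""
    return a * d - b * d * d

def p(d, k=8.0, d0=0.5):
    """Logistic detection probability (assumed illustrative form)."""
    return 1.0 / (1.0 + math.exp(-k * (d - d0)))

def argmax_on_grid(f, n=2001):
    """Return the grid point on [0, 1] maximizing f."""
    grid = [i / (n - 1) for i in range(n)]
    return max(grid, key=f)

def d_star_eghce(theta, C):
    """EG-HCE: depth maximizes V(d) - theta*p(d)*C."""
    return argmax_on_grid(lambda d: V(d) - theta * p(d) * C)

def d_star_ngm():
    """NGM ignores enforcement risk: depth maximizes V(d) alone."""
    return argmax_on_grid(V)

# Exogenous enforcement shock: the consequence parameter C triples.
pre_shock = (d_star_eghce(theta=0.5, C=1.0), d_star_ngm())
post_shock = (d_star_eghce(theta=0.5, C=3.0), d_star_ngm())
```

In this parametrization the EG‑HCE depth falls after the shock while the NGM depth is unchanged, which is exactly the divergence the decisive test exploits.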


6.4 Environmental Parameters


The model can be extended with environmental parameters:


·  s_1  = surveillance intensity, which affects  k .

·  s_2  = rule‑enforcement uncertainty, which affects  C .

·  s_3  = cultural power‑distance index, which affects  V(d) .


Then:


U(d \mid \mathbf{s}) = V(d \mid s_3) - \theta \cdot p(d \mid s_1) \cdot C(s_2)


6.5 Comparative Statics and the Epistemic Risk Dominance Region


The model generates clear directional predictions:


\frac{\partial d^*}{\partial s_1} < 0, \quad \frac{\partial d^*}{\partial C} < 0, \quad \frac{\partial d^*}{\partial \theta} < 0
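The \theta and C signs can be checked numerically under the assumed functional forms. The sketch below uses invented parameters; the s_1 channel (via k) is omitted here because its effect on d^* depends on where the optimum sits relative to the logistic midpoint.

```python
import math

def d_star(theta, C, a=1.0, b=0.6, k=8.0, d0=0.5, n=2001):
    """Grid-search d* for U(d) = (a*d - b*d^2) - theta * p(d) * C."""
    def U(d):
        prob = 1.0 / (1.0 + math.exp(-k * (d - d0)))
        return (a * d - b * d * d) - theta * prob * C
    grid = [i / (n - 1) for i in range(n)]
    return max(grid, key=U)

base = d_star(theta=0.5, C=1.0)
higher_theta = d_star(theta=1.0, C=1.0)  # more risk-averse organization
higher_cost = d_star(theta=0.5, C=2.0)   # harsher detection consequences
```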


When  U(0) > U(d)  for all  d>0 , non‑adoption ( d=0 ) becomes optimal—the Rational Epistemic Disengagement Regime (REDR).


EG‑HCE identifies a structural region in which marginal epistemic improvement is systematically harmful across the entire feasible domain: the Epistemic Risk Dominance Region, defined by:


\theta p'(d) C > V'(d) \quad \text{for all } d > 0


In this region, any increase in governance depth reduces expected utility, making non‑adoption the unique optimum.


Theorem (Risk Reversal).

Under an enforcement‑sensitive regime (p'(d) > 0), if for all d > 0 the inequality \theta p'(d) C > V'(d) holds, then the unique optimal governance depth is d^* = 0.

Proof: From U(d) = V(d) - \theta p(d)C, we have U'(d) = V'(d) - \theta p'(d)C < 0 for all d>0. Hence U(d) is strictly decreasing on (0,1], and U(0) > U(d) for all d>0. ∎
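A quick numerical check of the theorem, with parameters deliberately chosen (as an assumption, not an estimate) so that the risk-dominance inequality \theta p'(d) C > V'(d) holds across the whole domain:

```python
import math

# Illustrative parameters: with theta*C = 8 and the logistic below, the
# minimum of theta*p'(d)*C on [0, 1] (~1.13, at the endpoints) exceeds the
# maximum of V'(d) = a - 2*b*d (= 1, at d = 0).
a, b = 1.0, 0.6
k, d0 = 8.0, 0.5
theta, C = 2.0, 4.0

def p(d):
    return 1.0 / (1.0 + math.exp(-k * (d - d0)))

def U(d):
    return (a * d - b * d * d) - theta * p(d) * C

grid = [i / 1000 for i in range(1001)]

# Risk dominance: theta * p'(d) * C > V'(d) for all d > 0, where the
# logistic derivative is p'(d) = k * p(d) * (1 - p(d)).
risk_dominates = all(
    theta * k * p(d) * (1.0 - p(d)) * C > a - 2.0 * b * d
    for d in grid if d > 0.0
)

# U is then strictly decreasing, so the optimum is the corner d* = 0.
d_star = max(grid, key=U)
```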


Core prediction of EG‑HCE: Under sufficiently high enforcement risk, epistemic improvement becomes a liability rather than an asset. In sufficiently high‑constraint environments, improvements in epistemic process quality can increase expected organizational harm, reversing the standard governance assumption that better reasoning is always beneficial.


This reveals the Epistemic Self‑Exposure Problem: the very structures intended to improve reasoning create traceable artifacts that invite detection, making deeper governance a liability under constraint.


6.6 Empirical Anchor Scenario (High‑Confidence Test Case)


A natural setting for testing EG‑HCE is a panel of civil society organizations subject to an exogenous enforcement shock, such as a regulatory crackdown. For example, multi‑country NGO data before and after a shift in legal enforcement (e.g., 2015–2022) can be used to examine whether:


· Governance depth (measured by documentation volume, structured dissent protocols) declines in the post‑shock period in countries with high enforcement sensitivity.

· Organizations in countries where formalization signals legitimacy (e.g., where legal requirements protect rather than target) do not exhibit the same decline, or show an increase.


This design allows testing the core prediction that enforcement risk compresses observable epistemic depth, while controlling for confounding factors through difference‑in‑differences.
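A minimal difference‑in‑differences computation on invented numbers shows the shape of the test. All depth values below are fabricated for illustration; treated units stand for organizations in high enforcement‑sensitivity countries, control units for legitimacy‑signaling settings.

```python
# Governance-depth proxies (e.g., standardized documentation volume),
# fabricated purely for illustration.
treated_pre  = [0.62, 0.58, 0.65, 0.60]   # high-sensitivity units, pre-shock
treated_post = [0.31, 0.28, 0.35, 0.30]   # same units after the crackdown
control_pre  = [0.55, 0.57, 0.54, 0.58]   # legitimacy-signaling units
control_post = [0.56, 0.54, 0.57, 0.55]

def mean(xs):
    return sum(xs) / len(xs)

# DiD estimate: change in treated minus change in control.
did = (mean(treated_post) - mean(treated_pre)) - (mean(control_post) - mean(control_pre))
```

A negative estimate is consistent with enforcement risk compressing observable epistemic depth; parallel pre‑trends and shock exogeneity remain the usual identifying assumptions.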


6.7 Stylized Case: Enforcement Shock in a Human Rights Organization


A transnational human rights organization, employing full PDG (d \approx 0.8), faced a sharp increase in surveillance. In response, formal documentation was minimized, dissent moved to non‑attributable channels, and depth dropped to d \approx 0.2. Staff retention improved. This illustrates adaptation without normative claim; survivorship bias may overrepresent successful low‑visibility configurations.


6.8 Minimal Observable Implication


Even if strategies are non‑invertible, EG‑HCE predicts an observable compression of epistemic artifacts—documents, dissent traces, recorded deliberation—as enforcement risk increases. This compression is directional and population‑level: across organizations exposed to rising risk, the aggregate volume of formal reasoning artifacts should decline, and the content should shift toward less traceable forms. This implication is directly testable using archival or survey data.


6.9 Measurement Proxies


Constructs, empirical proxies, and data sources:

  •  V(d)  (Epistemic Value): forecast accuracy, reversal rate, error detection (internal organizational data; MVEP)
  •  p(d)  (Detection Probability): audit frequency, data requests, retaliation incidents (surveys; protection reports)
  •  C  (Cost of Detection): sanction probability, funding loss, dissolution risk (historical data; environmental indices)
  •  s_1  (Surveillance Intensity): digital surveillance index (digital freedom indices)
  •  s_2  (Legal Uncertainty): Rule of Law index, CPI (WGI; CPI)
  •  s_3  (Power Distance): Hofstede PDI; autonomy indices (Hofstede Insights; surveys)


These proxies are directional indicators, not precise measurements.


6.10 Minimum Viable Empirical Test: Vignette Experiment


A vignette‑based survey experiment is the primary empirical strategy. Participants are randomly assigned to low vs high enforcement risk conditions, evaluate identical decision scenarios, and indicate preferred governance depth. Predictions: lower depth, lower willingness to document dissent, and higher preference for non‑attributable reasoning under high risk.
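A planning‑stage power calculation helps size such an experiment. The sketch below uses a normal approximation for a two‑sided, two‑sample comparison of mean preferred depth; the 0.5 standard‑deviation effect size is an assumed planning value, not a predicted result.

```python
import math

def power_two_sample(effect_size, n_per_group):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05."""
    z_alpha = 1.959963984540054                       # Phi^{-1}(1 - 0.05/2)
    ncp = effect_size * math.sqrt(n_per_group / 2.0)  # noncentrality parameter
    # Power ~ P(Z > z_alpha - ncp); the opposite rejection tail is negligible.
    return 0.5 * math.erfc((z_alpha - ncp) / math.sqrt(2.0))

# Roughly 64 participants per condition for ~80% power at the assumed effect.
approx_power = power_two_sample(effect_size=0.5, n_per_group=64)
```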


A promising empirical domain is regulatory crackdowns on civil society organizations, where exogenous shifts in enforcement intensity provide natural experimental variation. Such settings allow testing of whether governance depth declines in response to increased surveillance, as EG‑HCE predicts.


The primary empirical contribution of EG‑HCE is not parameter estimation but directional falsification through observable compression of epistemic artifacts. This focus on directional shifts rather than precise magnitudes makes the framework robust to measurement noise and suited for contexts where structural identification is infeasible.


6.11 Operationalizing  p(d) 


Regime, mechanism, and observable proxy:

  • Artifact‑based (detection scales with documentation volume): pages, reports, filings
  • Network‑based (risk exponential in the number of participants): group size, presence of multiple units
  • Signal‑based (threshold triggered by sensitive topics): prohibited topics, keywords


6.12 Epistemic Observability Frontier


The Epistemic Observability Frontier captures the trade‑off between epistemic depth and observability exposure. In high‑constraint environments the frontier compresses, forcing organizations to choose between shallower depth and higher detection exposure.


6.13 Null Prediction


To avoid overgeneralization and sharpen testability, EG‑HCE yields a clear null prediction: in low‑constraint environments (where enforcement risk is negligible,  \theta p(d)C \approx 0 ), there is no systematic relationship between governance depth and enforcement exposure; observed depth will be determined solely by epistemic benefits V(d). This distinguishes EG‑HCE from models that predict universal risk‑sensitivity.


---


7. Identification Strategies for Empirical Testing


· Difference‑in‑Differences with external enforcement shift.

· Natural experiment (leadership change altering enforcement).

· Instrumental variable (donor exposure, with caveat on exclusion).

· Vignette experiment as primary low‑risk test.


A high‑resolution identification design would exploit sudden legal classification changes that retroactively alter the risk associated with identical documentation practices, allowing within‑organization comparison before and after classification. Such changes—for example, when a previously routine reporting requirement becomes legally hazardous—provide exogenous variation in p(d) while holding d constant, enabling cleaner estimation of the effect of enforcement risk on governance choices.


These strategies are approximations for bounded empirical testing, not definitive causal identification. EG‑HCE is a structurally disciplined theoretical framework with bounded empirical testability.


7.1 Falsification Tests


· P1: Positive correlation between written PDG documentation and sanctions in high‑surveillance contexts.

· P2: Cognitive documentation users have higher survival rates.

· P3: Risk differs between distributed and centralized structures.

· P4: PDG use increases during high‑international‑attention periods.

· P5 (Negative): In legitimacy‑signaling regimes, formalization reduces detection.

· P6: The variance of documentation practices across similar organizations increases following enforcement shocks, reflecting heterogeneous movement toward low‑observability adaptations. This prediction captures the dispersion of governance responses under uncertainty.


7.2 Model‑Level Falsification


· Falsif 1:  p(d)  not increasing with d in enforcement‑sensitive regimes.

· Falsif 2: No relationship between d and V(d).

· Falsif 3: \theta does not affect adoption.

· Falsif 4: Non‑adoption equilibrium never observed under high risk.

· Falsif 5: Depth increases under higher enforcement exposure.


---


8. Relationship to the ABUWT Architecture


EG‑HCE complements PDG with a contextual lens. Within the ABUWT ecosystem, EG‑HCE can be positioned as an adaptation for high‑constraint environments (Level 6).


---


9. Conceptual Boundary with Strategic Behavior Literature


EG‑HCE is analytically orthogonal to strategic compliance, hidden transcripts, and institutional evasion literatures. Its object is epistemic process quality under enforcement risk, not symbolic conformity, off‑stage resistance, or rule circumvention.


· Strategic compliance: focuses on decoupling; EG‑HCE models utility under uncertainty, not symbolic conformity.

· Hidden transcripts: explain expression under domination; EG‑HCE explains deformation of decision quality under risk.

· Institutional evasion: focuses on circumvention; EG‑HCE anticipates when formal governance becomes counterproductive.


---


10. Research Agenda and Falsifiability


1. Comparative case studies

2. In‑depth interviews

3. Agent‑based simulations

4. Toolkit development for risk zones (Green/Yellow/Red)


Figure 2: Risk Zones for PDG Application


```
Risk Zone    Characteristics                                  Implication for PDG
─────────────────────────────────────────────────────────────────────────────────
Green Zone   Low enforcement risk; stable legal environment;  Full PDG recommended
             low power distance; independent judiciary        with standard depth
                                                              (d ≈ 0.5)
─────────────────────────────────────────────────────────────────────────────────
Yellow Zone  Moderate surveillance; uncertain enforcement     Selective adaptation:
             but some protection exists; hierarchical         low-observability
             culture with limited dissent tolerance           patterns, reduced
                                                              depth (d ≈ 0.2–0.3)
─────────────────────────────────────────────────────────────────────────────────
Red Zone     High surveillance; arbitrary enforcement;        Disengagement advised
             no protection for dissent; extreme power         (d = 0); prioritize
             distance or active persecution                   survival
```


10.1 Failure Modes of EG‑HCE


EG‑HCE can fail to deliver accurate analysis if any of the following conditions occurs:


1. False Sense of Safety – Actors feel safe because they apply low‑observability strategies, yet authorities can still detect them through surveillance methods not accounted for in the model (e.g., informants, social‑network analysis).

2. Co‑optation by Authority – PDG is used by authority holders to create an illusion of transparency (performative transparency) without substantive change, so adaptation strategies become ineffective because the authority itself adopts the practice as a legitimation tool.

3. Cognitive Overload – Cognitive documentation proves unscalable; organizations facing complex decisions cannot maintain reasoning consistency without documentation.

4. Network Exposure Risk – Distributed responsibility structures actually widen the traceable social footprint, so individual risk is not significantly reduced.

5. External Protection Collapse – When external protection (e.g., international donors, global media coverage) collapses suddenly, strategies previously considered safe become high‑risk.


Acknowledging these failure modes reinforces the claim that EG‑HCE is a realistic analytical framework, not a magic solution.


10.2 Scope Conditions


To avoid over‑generalization, EG‑HCE states the conditions under which the framework is most (and least) applicable.


EG‑HCE is most applicable when:


· A minimal organizational structure exists (not an individual working alone).

· There is limited decision autonomy (the organization still has room to choose strategies).

· Conditions fall short of total disintegration (e.g., large‑scale open conflict).

· Minimal data on surveillance intensity and sanctions are available.

· The framework is expected to exhibit variation in applicability across sectors, particularly between public, private, and hybrid organizations, due to differences in legal exposure, resource availability, and institutional logics.


EG‑HCE is less applicable when:


· The organization operates under total disintegration or mass forced dissolution.

· No organizational structure remains.

· Individuals lack the autonomy to choose strategies (decisions are entirely determined by external coercion).


10.3 Boundary Failure Case


When detection probability p(d) is non‑increasing (e.g., legitimacy‑signaling regimes) or epistemic benefits are extremely convex, deeper governance may remain optimal even under high risk. This nuance prevents overclaim.


---


11. Conclusion: From Naive Universality to Context Awareness


EG‑HCE offers a realistic roadmap: low risk → full PDG; moderate risk → low‑observability adaptation; high risk → disengagement.


EG‑HCE reveals the Epistemic Self‑Exposure Problem: in enforcement‑sensitive environments, the very processes designed to improve reasoning create traceable artifacts that invite detection, turning epistemic depth into a source of risk. This framework shows that better reasoning is not always better—it can become systematically harmful when the process itself generates exposure.


EG‑HCE is not about what actors do under constraint, but about when reasoning itself ceases to be beneficial. It contributes a complementary shift to the governance discourse: from optimizing process to understanding the conditions under which process inverts its own value.


As stated in the original CAA document:


"Let time judge whether this idea will prove useful. For now, it is simply placed here, waiting to be found by those who need it."


EG‑HCE is not a prescriptive guide for action, but an analytical framework to understand constraints on epistemic governance under varying levels of institutional risk.


---


Appendix: Risk Classification Summary Table


Condition, low‑risk alternative (documented patterns), and disengagement condition:

  • Documentation can be used as an enforcement tool (cognitive documentation; information compartmentalization): documents are seized or demanded for surrender
  • Dissent can trigger sanctions (indirect signaling; distributed responsibility): members are sanctioned for voicing dissenting views
  • Authorities monitor digital communications (face‑to‑face interaction; coded communication): communication technology is under intensive surveillance
  • External partners withdraw (strengthening smaller local networks): external protection collapses entirely
  • No guarantee of legal protection (rotating roles; collective decision structures): members are dismissed or sanctioned

---


Accountability‑Based Universal Wisdom and Trust

Cross‑Sector Pre‑Decision Governance Translator


March 2026


📧 Contact: tpapgtk@gmail.com

📄 License: CC BY‑NC‑SA 4.0

🌐 Archive: https://abuwt.blogspot.com