What I want to focus on is how the Bernoulli Fallacy results in a systematic error whereby confidence in assessments of cause-and-effect relationships is erroneously inflated, with disastrous implications for strategic decision-making. To illustrate the point, let us consider how this error manifests within the Cynefin framework.
For the uninitiated, the Cynefin framework offers an optimal approach to goal-seeking behavior based on how well cause-and-effect relationships are understood. When we understand that, in a given context, cause and effect can only be ascertained in retrospect, we recognize that we are in the Complexity quadrant and can follow the probe-sense-respond approach to make effective decisions. In other words, you try stuff, see how it goes, and adjust as necessary. As understanding increases, it is possible to shift quadrants and change approaches. What the Bernoulli Fallacy facilitates is confidence that you're in, say, the Complicated or Obvious quadrant when you're actually in Chaos or Complexity. This locks the managerial class as a whole into taking systemically obtuse approaches, insofar as we assess their behavior teleologically.

This raises the question: cui bono? As always when it comes to the seemingly intractable problems of modernity, it is the central bank owners and their lackeys. Widespread confusion and incompetence are a side effect of how the elite wield the scientific establishment to produce such systematic errors in any direction they want. When a wee p is enough for people to assume a causal relationship, you just need to feed money and confer status upon individuals and institutions that produce the results you want. They don't even need to be in on it. If you fund hundreds of studies on SSRIs, for example, some will have a wee p, those papers will see the light of day, and the other results will fade into oblivion thanks to publication bias.
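To see how little a wee p proves on its own, here is a minimal simulation sketch. The numbers are made up purely for illustration (hundreds of two-group studies of a treatment with zero true effect, fifty subjects per arm); the point is the mechanism: chance alone hands roughly 5% of null studies a publishable p below 0.05, and publication bias does the rest.

```python
# Publication-bias sketch: simulate many studies of a treatment with NO real
# effect and count how many still land a "wee p" (p < 0.05) by chance alone.
# Study counts and sample sizes are assumed values, not from any real dataset.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_arm = 500, 50          # hypothetical numbers of studies and subjects
published = 0

for _ in range(n_studies):
    treatment = rng.normal(0.0, 1.0, n_per_arm)   # true effect is exactly zero
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:                         # only the "significant" results see the light of day
        published += 1

print(f"{published} of {n_studies} null studies were 'significant' "
      f"({published / n_studies:.0%}) -- roughly the 5% expected by chance.")
```

Fund enough studies and the "significant" pile will never be empty, even when there is nothing there to find.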
This issue touches pretty much all of modernity, but I'll use a specific example to illustrate. McKinsey & Company did some studies and found a wee p that led countless executives to assume that adding minorities to the C-Suite had a causal relationship with increased profits. They then assumed they were in the Complicated quadrant, instituted DEI programs, and destroyed the competence of their organizations (at least teleologically; they can still look profitable thanks to continuous double-digit inflation, but that's another story). The hypothesis that adding minorities and women to C-Suites will increase profits is kind of retarded on its face, but I have enough epistemic humility to acknowledge that there are contexts where this could be the case. The problem is, they didn't think it might work. That is something you think when you know you're in the Complexity quadrant. No, thanks to their wee little p's, they knew it would work. This explains why there was no effort to sense after the programs were implemented. The sensing had already happened (the McKinsey studies). All that was left to do was analyze and respond (by instituting DEI programs). It didn't matter that the McKinsey studies failed to replicate, nor that the programs produced no results in the few places that even tried to measure ROI. They had a belief born of a fallacy, advanced by a system whose ultimate purpose is not to elucidate cause-and-effect relationships, but to exploit and parasitize us.
What to do? Keep the replication crisis in mind and don't take wee p values as evidence of anything. If you do need to inform decisions based on existing scientific literature, don't bother with the p values; look at the mean effects and the sample sizes. Large samples with between-group effects that seem important to you, replicated across multiple studies, are informative, especially when they come with a plausible mechanism. Recognize that the entire construct of "statistical significance" isn't just dumb; it is built around a fallacy that is employed to deliberately deceive you. The ultimate goal of this deception is, at best, to degrade the quality of your decisions. At worst, the goal is to encourage you to actively work against your own interests in the infinite game.
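For the "mean effects and sample sizes" part, here is a minimal sketch of one way to do it, assuming you have (or can extract) a mean effect and standard error per study. The study values below are invented, and inverse-variance weighting is just the simplest fixed-effect pooling, not the only defensible choice; the idea is to ask whether the pooled effect is still big enough to matter to you, not whether any single p was wee.

```python
# Sketch: pool effect sizes across studies with inverse-variance weights
# (basic fixed-effect meta-analysis). All study numbers here are hypothetical.
import numpy as np

# (mean effect, standard error, sample size) per study -- made-up values
studies = [
    (0.42, 0.15, 180),
    (0.35, 0.20, 110),
    (0.48, 0.12, 260),
]

effects = np.array([e for e, _, _ in studies])
ses = np.array([se for _, se, _ in studies])
weights = 1.0 / ses**2                 # larger, more precise studies count for more

pooled_effect = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled mean effect: {pooled_effect:.2f} +/- {pooled_se:.2f}")
print(f"Total sample size: {sum(n for _, _, n in studies)}")
```

If the pooled effect is tiny, inconsistent across studies, or attached to no plausible mechanism, no quantity of wee p's should move you.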
What's interesting to me about the Cynefin framework is that, while the recommended approaches are all loops, sense comes first on the right half and not on the left. What's weird is that sense (i.e., sensory information gathering) is simultaneously the most useful heuristic and the most goal-destructive one if the sensory data and/or collection methods are somehow corrupted. "Something just smells wrong about this" could be intuition functioning as it should, or it could be that you've been inundated by so many complex lies that you're doing your own wee p analysis without even knowing it.
I think that's why "common sense" gets re-mediated as secondary axioms, stories, parables, etc. If something seems "obvious", we have a long-established record of other obvious, just-so, self-evident truths to fall back on for meta-analysis. Trying to start from scratch procedurally with every novel problem might perform well in the lab, but not in the wild.