David Robert Grimes has a new paper, On the Viability of Conspiratorial Beliefs, in which he describes a simple model of conspiracies over time, specifically, a model of how long it would take until the secret leaks out. In general, his model reflects the obvious fact that the more people involved, the harder it is to keep a secret, and the sooner a secret will “get out”.
It isn’t clear whether such reasoning, either formal or informal, actually makes sense.
Grimes is drawn to the topic because of cases of popular beliefs in “conspiracies” which undermine important scientific understandings. He imagines that using this model to demonstrate the implausibility of large-scale conspiracies “might be useful in counteracting the potentially deleterious consequences of bogus and anti-science narratives”. I have my doubts that any kind of logical reasoning will do any good.
Overall, the model formalizes what is probably a common intuition, using arguments based only on the number of people who must keep the secret: it is unlikely that a secret conspiracy among a large number of people can remain secret for long. Sooner or later, someone will goof, defect, or be uncovered.
This model is illustrated by the fairy tale that NASA faked the moon landing. (Never mind that more people know about the supposed conspiracy than about the actual event.) Such a conspiracy would require that thousands of scientists and other workers all knowingly or unwittingly upheld the lies, and did so for decades. Aside from any other doubts (such as what the motive for such a conspiracy might be), it is difficult to imagine that no one would have spilled the beans by now.
Grimes’ model formalizes this intuition, and gives a simple model that suggests that such a secret could not be kept for more than a few years.
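The basic intuition can be sketched numerically. The toy function below is not Grimes’ actual equations (he estimates leak rates from historical cases such as the NSA and Tuskegee revelations); it simply assumes, as an illustration of mine, that each conspirator independently leaks with some constant annual probability `p_leak`, so the chance the secret survives shrinks geometrically with both headcount and time.

```python
def secrecy_probability(n_conspirators, years, p_leak=0.001):
    """Probability that no one has leaked after `years` years.

    Toy model: each of n_conspirators independently leaks with
    constant annual probability p_leak, so the secret survives
    only if every conspirator stays quiet every year.
    """
    return (1.0 - p_leak) ** (n_conspirators * years)
```

Even with a very low per-person leak rate, `secrecy_probability(10000, 5)` is vanishingly small compared to `secrecy_probability(100, 5)`, which is the whole of the argument: sheer numbers doom the secret.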
Grimes’ model simplifies the situation, of course; this is what a model is supposed to do. But it is valuable in that it made me think about what he leaves out, and whether there are important refinements to consider. In fact, I think there are two important refinements worth thinking about, and possibly adding to extend the model.
First, the model treats all groups the same, averaging over the motivations, resources, and structure of the alleged conspirators. So, a group of a few dozen scientists at the FBI—who know each other and have common goals—is compared to 10,000 climate scientists all over the world, who have never met, and who are competitors. Obviously, these two groups differ by more than sheer numbers.
I note that even large agencies such as intelligence services keep secrets, including conspiracies, by compartmentalizing information, so that most of the people know only parts of the conspiracy. This contrasts to, for example, large public health agencies, where information is widely shared and published.
Hand in hand with information management, camouflage, concealment, and disinformation are employed to cover the conspiracy with a cloud of uncertainty and, yes, fake conspiracy stories. Even the conspirators themselves may believe some of the cover stories.
Basically, these kinds of measures mean that Grimes’ notion of “the number of conspirators”, i.e., people who know and might leak, glosses over the range of knowledge that individuals might have. Perhaps his “N” needs to be adjusted for the distribution of knowledge among the conspirators. Perhaps a few top spies know everything, while the many foot soldiers each know only a tiny bit of the picture, so the “effective N” is much smaller than the raw number of people.
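One hypothetical way to make this adjustment concrete (my own illustration, not anything in Grimes’ paper) is to weight each conspirator by the fraction of the full secret they actually know, so compartmentalized couriers count for far less than fully informed insiders:

```python
def effective_n(knowledge_fractions):
    """Hypothetical 'effective N': sum each conspirator's share
    of the secret (1.0 = knows everything, 0.01 = knows 1% of it).

    A compartmentalized operation with a few insiders and many
    partially informed foot soldiers yields a much smaller
    effective N than its raw headcount.
    """
    return sum(knowledge_fractions)

# e.g. 5 insiders who know everything plus 1,000 couriers who
# each know 1% of the picture: effective N is about 15, not 1,005.
```

Whether a leak of a 1% fragment exposes the conspiracy at all is a further question this simple sum ignores, but it shows how compartmentalization could shrink the number that drives Grimes’ exposure curve.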
The second simplification that may be too simple is Grimes’ implicit definition of “exposure”. He adopts a binary step function, in which the conspiracy is either “believed” or “disbelieved”, based on whether it has been “revealed” or not. As far as I can see, his metric for a conspiracy being “generally believed” is that “everyone knows” it is true (or false). His examples focus on cases where mainstream media have become aware and publicized a story—either true or false. In these cases, the not terribly reliable mass media are taken to reflect “general belief”.
Other cases he discusses, such as the NASA moon landing story, are more clearly folktales, which, I’m very sorry to tell you, are not actually motivated by a desire for objective truth. The NASA story is about government deception and “those scientists” who, for their own reasons, seek to fool “us”. It’s not about what really happened, it’s about what “people like us believe”.
My main point here is that “belief” in a conspiracy isn’t so simple. It is often the case that some people believe a specific narrative, while others don’t (and some might think it is a morally valuable tale, regardless of “truth”). A conspiracy is unmasked when the proportions of belief and disbelief shift. It is rarely the case that absolutely everyone believes in a conspiracy one day, and the next day no one does.
Furthermore, this is actually a case of signal detection on a noisy channel: the story might or might not be real, and we must decide to believe or not. There are four cells in this table, true negative (we believe there is no conspiracy, and there is none), true positive (we believe there is a conspiracy, and there is one), false negative (the real conspiracy is successfully concealed, because we don’t believe in it), and false positive (we believe in a non-existent conspiracy).
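The four cells can be written out as a tiny lookup, purely to make the taxonomy concrete (the labels follow standard signal-detection terminology; the function is my own sketch, not anything in Grimes’ paper):

```python
def classify(believe_conspiracy, conspiracy_exists):
    """Label a (belief, reality) pair with its signal-detection cell."""
    if believe_conspiracy and conspiracy_exists:
        return "true positive"    # real conspiracy, correctly believed
    if believe_conspiracy and not conspiracy_exists:
        return "false positive"   # belief in a non-existent conspiracy
    if not believe_conspiracy and conspiracy_exists:
        return "false negative"   # real conspiracy, successfully concealed
    return "true negative"        # no conspiracy, correctly disbelieved
```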
Information about the conspiracy may shift people’s belief among these four boxes, as we decide to believe or disbelieve. Grimes’ model is about one kind of especially convincing information, a first-hand leak or confession, which would move many people into true positive or true negative. He is particularly concerned with cases where large numbers of people are in the “false positive” state, which he would hope to change to “true negative” even in the absence of a leak—that is, by the very fact that no one has confessed or leaked.
If that paragraph doesn’t confuse you, you aren’t reading carefully!
The main point is that his model of “belief”, and of how leaks may affect it, is pretty confusing when you look at it carefully. In fact, his overall point is to argue that belief in a particular conspiracy should be dropped because, if it existed, we would know about it by now. But if I already think we do know about it, then how is this argument relevant?
In the end, of course, this is all pretty irrelevant. Popular conspiracy theories are folk stories that convey attitudes about power and identity. “They” are lying to us, and “we” know better. And particular narratives become important symbols for identity with a particular “we”. No amount of logic matters to this “we vs they” narrative. In fact, many times people will actually say, “it doesn’t matter if it really happened, it should have happened”.
- David Robert Grimes, On the Viability of Conspiratorial Beliefs. PLoS ONE, 11 (1):e0147905, 2016. http://dx.doi.org/10.1371%2Fjournal.pone.0147905