
On On the Historicity of Jesus:

Why We Have Reasons to Doubt Richard Carrier's Historiography


No, I'm not just writing this to pick on Dr. Richard Carrier (that's just a perk; anybody who
thinks they can resolve a ~300-year debate in the mathematics community in a book on historical
methods is either just about the greatest mathematical mind the world has ever seen or doesn't
understand math very well but doesn't let that stop him from making some extravagant claims). It's
because I get to review research (something that I don't do enough of here) while talking about
fundamental tools and methods in the sciences. In fact, because I've reviewed at least one of
Carrier's works before, this review will be much more focused. It is the first of 2 or 3 criticisms I
have of On the Historicity of Jesus, not counting any of the issues relating to historical Jesus
research or even problems with Carrier's historical analysis. You may well ask, "How can you
review historical scholarship without touching on its historical analysis?" Boy, I am sure glad
you asked. Dr. Carrier has been kind enough to base his entire approach (to all of historical
research) on his misunderstanding of Bayes' Theorem (BT), which he has supplemented with
misuse of logic and a more general ignorance of statistics and probability. As a result, I need
only show that the foundations of his entire argument (and method, as outlined in a sort of
methodological prequel, Proving History) are unsound. And to do this, I get to introduce essential
components of logic, probability, and Bayesian statistics/methods (though not all in this post).
Dr. Carrier's probability space and real probability spaces:
Carrier has set himself up to be knocked down so thoroughly that, whereas normally I'd be
saying "for example" or "to illustrate," my first example is the foundation for his entire work.
On p. 30 (whence comes the scanned image below), Carrier reiterates how he intends to (and
indeed believes he already has) appropriately class together all possible hypotheses regarding
Jesus' existence:

Now we get to the good stuff: Bayes' Theorem, Bayesian inference, and probability. A key issue
that runs through both On the Historicity of Jesus and Dr. Carrier's previous book Proving
History is the confusion between Bayes' Theorem (BT) and Bayesian inference/probability/etc.
The Bayesian vs. frequentist debate is highly interesting, quite important, and far too
complicated to touch on here. Luckily, both camps agree that Bayes' Theorem proper is
mathematically sound, and is derivable from probability axioms both sides accept. It's actually
rather mundane. To introduce it, I need to say a word about conditional probability.
More than a word on conditional probability:
There are different ways that we evaluate combined probabilities. For example, there is
independence: if I flip a fair coin and get heads, this tells me nothing about what I will get on the
next flip. However, if I draw an ace from a deck of cards and don't put it back (don't replace it)
before drawing again, I've changed the odds that I will get an ace (there are fewer cards in the
deck and fewer aces). Conditional probability concerns outcomes or events that
are, well, conditional. For example, imagine I pick a student at random from the pool of
graduate students currently enrolled at Columbia University. It turns out the student I picked is
almost finished with her doctoral work in ancient history. I find out that her undergraduate
degree was in mathematics. This is so interesting that I start sampling students from Columbia
University's graduate history program to answer the question: given that a student is a
graduate history student, what is the probability that the student majored in math as an
undergraduate?
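To make the arithmetic concrete, here is a minimal sketch in Python. The enrollment counts are made up for illustration (they are not Columbia's actual figures); the point is only that conditioning shrinks the sample space from all graduate students down to the history students.

```python
# Conditional probability from counts, with hypothetical numbers.
# P(math BA | grad history student) = P(history AND math BA) / P(history)

total_grad_students = 30000   # hypothetical size of the whole graduate pool
history_students = 120        # hypothetical graduate history enrollment
history_with_math_ba = 3      # hypothetical history students with a math BA

# Joint probability: picking a history student who has a math BA
p_joint = history_with_math_ba / total_grad_students

# Probability of picking a history student at all
p_history = history_students / total_grad_students

# Conditioning restricts the sample space to the history students
p_math_given_history = p_joint / p_history
print(p_math_given_history)  # 3/120 = 0.025
```

Note that the 30,000 cancels out: once we condition on "history student," only the 3-in-120 ratio matters.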

A cooler example: suppose the Man in Black challenges me to a battle of wits for the princess, to
the death. I pour us wine, and he takes out a small tube of powder saying, "Inhale this but do not
touch." I smell nothing. He tells me, "What you do not smell is called iocane powder; it is
tasteless, odorless, dissolves instantly in liquid, and is among the more deadly poisons known to
man." Next he takes the glasses, turns his back to do something with them so that I can't see, and
then puts them back on the table. I have to determine where the poison is, and I choose the glass
that was in front of him before I switched glasses when his back was turned. I start laughing at
him, thinking I've outwitted him, but then die. The Princess Bride (Buttercup) has been sitting
with us the whole time. Knowing that I switched the glasses and yet drank the poison, she
reasons that it must be because all this time it was the Man in Black's cup that was poisoned.
True, but she still needs a probability theory refresher. Given the fact that I died from being
poisoned, we know the cup I drank from was poisoned. But what if I hadn't switched the glasses
and had instead drunk from the glass in front of me? It turns out that, given this, I still would
have died. The point is that Buttercup can infer from the fact that I died from drinking the cup I
did that it was poisoned, but not that this was the cup that was poisoned: both were.
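Buttercup's mistake can be made explicit with a toy enumeration (a sketch assuming, as in the film, that both glasses were poisoned and the poisoner is immune): the probability of death is 1 conditional on either choice, so my death carries no information about which glass was "the" poisoned one.

```python
# Both glasses are poisoned; death is certain whichever one I drink.
glasses = {"mine": "poisoned", "his": "poisoned"}

def p_death_given_choice(choice):
    # Death is certain if and only if the chosen glass is poisoned
    return 1.0 if glasses[choice] == "poisoned" else 0.0

# Whether or not I switched, the conditional probability of death is identical,
# so observing my death cannot distinguish the two hypotheses.
print(p_death_given_choice("his"))   # the glass I actually drank (after switching)
print(p_death_given_choice("mine"))  # the glass I'd have drunk without switching
```

Since both conditional probabilities equal 1, the likelihoods are identical and the observation "he died" cannot update belief toward either glass having been uniquely poisoned.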
From conditional probability to BT:
BT is merely an extension of this kind of probability. Let A be an outcome that must
happen together with one and only one of the outcomes B1, B2, B3, …, Bn. BT allows me to calculate,
given that A happened, the probability that (e.g.) B1 happened. By itself it's pretty useless. More
importantly, because it is derived from the standard probability axioms, we can't use it unless we
also know the exact probability for each element in the set of Bs AND the probability of A
GIVEN each element in this set.
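In symbols, P(B1 | A) = P(A | B1)P(B1) / [P(A | B1)P(B1) + … + P(A | Bn)P(Bn)]. A minimal sketch (with toy numbers of my own, not Carrier's) makes the requirement plain: every prior P(Bi) and every likelihood P(A | Bi) must be supplied, and the Bi must be mutually exclusive and exhaustive, before the theorem outputs anything.

```python
# Bayes' Theorem for a finite partition B1..Bn, usable only when we know
# every prior P(Bi) and every likelihood P(A|Bi).

def bayes(priors, likelihoods, i):
    """Return P(Bi | A) for a partition B1..Bn of the sample space.

    priors[j]      = P(Bj); must sum to 1 (mutually exclusive, exhaustive)
    likelihoods[j] = P(A | Bj)
    """
    assert abs(sum(priors) - 1.0) < 1e-9, "the Bj must be exhaustive"
    # P(A) by the law of total probability
    total = sum(p * l for p, l in zip(priors, likelihoods))
    return priors[i] * likelihoods[i] / total

# Toy numbers: three exclusive, exhaustive hypotheses with known probabilities.
priors = [0.5, 0.3, 0.2]
likelihoods = [0.1, 0.4, 0.8]
print(bayes(priors, likelihoods, 0))  # P(B1 | A) = 0.05/0.33
```

Remove any of those inputs, or let the hypotheses overlap, and the computation is simply undefined; that is the point pressed in the next section.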
From BT to Bayesianism
This is not something that we can do with Dr. Carrier's hypotheses, which are not mutually
exclusive, are not collectively exhaustive, do not correspond to known independent (not to
mention conditional) probabilities, and are not even single events. Of course, Dr. Carrier isn't
actually using BT.
Bayesian inference (and Bayesian statistics) are not new, but thanks largely to Fisher's distaste
for inverse probability and BT, until fairly recently few used Bayesian methods; everybody
relied on the frequentist interpretation of probability (a fair coin has a probability of 1/2 of landing
on heads because, if you flipped the fair coin an infinite number of times, the distribution would
be .5 heads and .5 tails). It is possible to recognize the efficacy of Bayesian statistics without
holding a Bayesian (or subjective) interpretation of probability. In fact, it's hard not to. But this is
for reasons that are unnecessarily complicated, almost entirely irrelevant, and involve the ways
in which such inferences are used where frequentist probability isn't particularly suited (like
machine learning). Even the most ardent Bayesian wouldn't agree that Carrier even COULD use
Bayesian inference/probability to evaluate the probability that Jesus existed other than as merely
a formal expression of his opinion (his subjective evaluation of the evidence using Bayesian
formalism), regardless of whether this outcome was one out of a mutually exclusive and
collectively exhaustive set:
Mathematical results of Cox (1946, 1961) and Savage (1954, 1972) prove that if p(θ) and
p(y|θ) represent a rational person's beliefs, then Bayes' rule is an optimal method of
updating this person's beliefs about θ given new information y. These results give a
strong theoretical justification for the use of Bayes' rule as a method of quantitative
learning. However, in practical data analysis situations it can be hard to precisely
mathematically formulate what our prior beliefs are, and so p(θ) is often chosen in a
somewhat ad hoc manner or for reasons of computational convenience. (Hoff, 2010)
I quote this introductory text for two reasons: first, because it is basically impossible to prove
that any method for updating whatever "a rational person's beliefs" are, given new evidence, is
optimal, which means the author is particularly optimistic about the power of Bayesian methods;
and second, because despite such an optimistic outlook (which is not, by the way, to be found in
a proof in Cox (1961) or Savage (1954, 1972); I don't have Cox's 1946 article), even here we are
told that problematic data in scientific research can make using Bayesian methods pointless or
worse.
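To illustrate the kind of updating Hoff describes, here is the textbook conjugate example (my illustration, not code from Hoff's book): a Beta prior over a coin's heads-probability θ, updated by Bayes' rule after observed flips. The essential feature is that the machinery operates on a parameter that carries a prior distribution.

```python
# A minimal sketch of Bayesian updating: with a Beta(a, b) prior on theta and
# k heads observed in n flips, the posterior is Beta(a + k, b + n - k) --
# the standard conjugate result for the binomial likelihood.

def update(a, b, heads, flips):
    """Return the posterior Beta parameters after observing the flips."""
    return a + heads, b + (flips - heads)

a, b = 1, 1                    # Beta(1, 1): a uniform prior over theta
a, b = update(a, b, heads=7, flips=10)
posterior_mean = a / (a + b)   # mean of Beta(8, 4)
print((a, b), posterior_mean)  # (8, 4), 8/12
```

The posterior mean (8/12 ≈ 0.667) sits between the prior mean (1/2) and the observed frequency (7/10), pulled toward the data as evidence accumulates; this parameter-with-a-prior structure is precisely what is absent from Carrier's setup.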
On the importance of reading and understanding your own sources:
Dr. Carrier cites several books on Bayes' Theorem, particularly in his Proving History. It does
not appear that he has understood what he has read. He gives perhaps the most complete
bibliography in one place in this book, on pp. 300-301. In it, he refers the reader to a proof of
Bayes' Theorem from a basic probability textbook, Probability, Random Variables, and
Stochastic Processes (2nd Ed.). Alas, I have only the 3rd & 4th editions, but I can't imagine
Papoulis is so much less clear in Carrier's edition about what Bayes' Theorem is that it fails to
note the assumptions made. And even if it were this unclear, his other references are quite clear
about the difference between Bayes' Theorem and Bayesianism of the type Carrier uses.
His first reference is to Jaynes's excellent text Probability Theory: The Logic of Science. Jaynes
dedicates a fair portion of chapter five in particular to the ways in which two people whose views
differ initially, who both receive the same new information, and who both use Bayesian reasoning
will not necessarily agree: "In fact, whatever the new information D, it should tend to bring
different people into closer agreement with each other… [a]lthough this can be verified in special
cases, it is not true in general" (p. 127). In the section of his bibliography containing sources
Carrier for some reason identifies as the most technical and advanced, we find Peter Lee's
Bayesian Statistics, in which Lee writes:
It should be clearly understood that there is nothing controversial about Bayes' Theorem
as such. It is frequently used by probabilists and statisticians, whether or not they are
Bayesians. The distinctive feature of Bayesian statistics is the application of the theorem
in a wider range of circumstances than is usual in classical statistics. In particular,
Bayesian statisticians are always willing to talk of the probability of a hypothesis, both
unconditionally (its prior probability) and given some evidence (its posterior probability),
whereas other statisticians will only talk of the probability of a hypothesis in restricted
circumstances. (Lee, 2012)
Lee addresses the difference in the preliminaries section of his book, and instead of determining
what the "wider range of circumstances" is to which Bayesian statisticians apply Bayes' rule,
Carrier points the reader to a proof of Bayes' Theorem that cannot be extended to these
circumstances.
circumstances. In another reference (Bergers Statistical Decision Theory and Bayesian Analysis
(2nd Ed.); p. 129) we not only find BT proper but are told explicitly how we must replace all the
elements in the theorems formula with a variable and parameter in order to use it as in Bayesian

statistics (the parameter part is absolutely fundamental to Bayesian methods, but is


entirely lacking from Carriers work). Finally, for those who have read Carriers On the
Historicity of Jesus and wish to compare his descriptions and so forth with what one last source
he lists on p. 301 of his earlier book, namely Hartigans Bayes Theory, Ive scanned and
(attempted to) crop the books formulation and derivation of BT:
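For readers without the scan at hand, the parametric form of BT that Berger works with (and that Hartigan treats in more general measure-theoretic dress) can be stated in standard notation (mine, not a quotation from either book): the posterior density of a parameter θ given data x is

```latex
% x: observed data; theta: parameter with prior density pi(theta);
% f(x|theta): likelihood; the integral normalizes over the parameter space Theta.
\[
  \pi(\theta \mid x)
    = \frac{f(x \mid \theta)\,\pi(\theta)}
           {\int_{\Theta} f(x \mid \theta')\,\pi(\theta')\,d\theta'}
\]
```

Note that θ here is a parameter equipped with a prior distribution over a well-defined space Θ, not a discrete historical hypothesis; that substitution is exactly the step Berger spells out and Carrier omits.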

References
Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis (2nd Ed.) (Springer
Series in Statistics). Springer.
Carrier, R. C. (2012). Proving History: Bayes's Theorem and the Quest for the Historical Jesus.
Prometheus Books.
Carrier, R. C. (2014). On the Historicity of Jesus: Why We Might Have Reason for Doubt.
Sheffield.
Cox, R. T. (1946). Probability, Frequency and Reasonable Expectation. American Journal of
Physics, 14(1), 1-13.
Cox, R. T. (1961). The Algebra of Probable Inference. Johns Hopkins Press.
Hartigan, J. A. (1983). Bayes Theory (Springer Series in Statistics). Springer.
Hoff, P. D. (2010). A First Course in Bayesian Statistical Methods (Springer Texts in Statistics).
Springer.
Jaynes, E. T. (2003). Probability Theory: The Logic of Science. Cambridge University Press.
Lee, P. M. (2012). Bayesian Statistics: An Introduction (4th Ed.). Wiley.
Papoulis, A. (1991). Probability, Random Variables and Stochastic Processes (3rd Ed.).
McGraw-Hill.
Papoulis, A., & Pillai, S. U. (2002). Probability, Random Variables and Stochastic Processes
(4th Ed.). McGraw-Hill.
Savage, L. J. (1972). The Foundations of Statistics (2nd Rev. Ed.; orig. 1954). Dover.
