The dedicated Mormon missionaries knock on Mac’s door. “Hello, we’re Mormon missionaries and we’d like for you to become a Mormon, too!”

“Why should I become a Mormon?”

“Well, it says in the **Book of Mormon**…”

“Hold it right there! I don’t want to hear a reason from the Book of Mormon, I want to hear one from the **Bible**!”

A circular argument is an argument in which you have to assume the premise in order to reach the conclusion, e.g.:

How do you know I’m trustworthy? Well, it so happens I AM trustworthy, so when I tell you I’m trustworthy, you know you can believe it!

T v ~T / T? Trustworthy or NotTrustworthy; therefore, Trustworthy?

(1) T v ~T [tautology] Trustworthy or NotTrustworthy

(2) ~T -> ~T [(1), material implication] NotTrustworthy implies NotTrustworthy

(3) T -> T [(2), transposition] Trustworthy implies Trustworthy

(4) T [assertion] Trustworthy

Therefore, T [(3) & (4), *modus ponens*] (I am) Trustworthy.
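The derivation can be checked mechanically. The sketch below (a minimal Python illustration; the helper names are my own) brute-forces a truth table over T and confirms that steps (1) through (3) are tautologies while step (4) is not:

```python
def is_tautology(formula):
    """True if the formula holds under every assignment of T."""
    return all(formula(t) for t in (True, False))

# Material implication: p -> q is equivalent to (not p) or q.
implies = lambda p, q: (not p) or q

step1 = lambda t: t or not t             # (1) T v ~T
step2 = lambda t: implies(not t, not t)  # (2) ~T -> ~T
step3 = lambda t: implies(t, t)          # (3) T -> T
step4 = lambda t: t                      # (4) T, the bare assertion

print([is_tautology(s) for s in (step1, step2, step3, step4)])
# [True, True, True, False] -- only the assertion in step (4) can fail
```

Only step (4) fails when T is false, and that is exactly the gap the argument papers over.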

The fallacy, of course, is in line (4): there is no logical warrant for asserting T, that I am trustworthy. I could just as easily be untrustworthy, ~T. Merely claiming trustworthiness does not make me trustworthy (in real life, experience suggests that people who insist loudly on their trustworthiness are often the least trustworthy). So from the tautology T v ~T, all I can validly derive is T v ~T again (or some other tautology).

In the absence of external proof, I cannot prove

- I am trustworthy;
- Mormonism is superior to Methodism; or even
- Faith is superior to no faith

For in each case, I am caught in a circular argument in what I can demonstrate or know. We generally consider circular arguments to be fallacies: they sound persuasive at first, but on closer examination turn out to prove nothing.

Epistemic circles are not limited to character or religion. Consider inductive reasoning versus deductive reasoning. Inductive reasoning is what we use to describe, explain, and predict the real world. How do I know if it’s raining? I look out the window or put my hand out the door or feel the ground to see if it is wet. When we depend on the senses, though, we introduce a measure of unreliability—our senses can be fooled (or perhaps we are trapped in The Matrix, being fed false data). For absolute certainty, we must turn to deductive reasoning, where the form of the argument guarantees the result. For instance:

(1) R -> W If it is raining, the ground will be wet.

(2) R It is raining.

Therefore W, the ground will be wet.

This is a form of argument so common and so important that it has its own name, *modus ponens*. It works no matter the contents of R and W; so long as (1) and (2) (the premises) are true, W (the conclusion) inexorably follows. This seems pretty nifty, right? The problem, though, is that it doesn't tell us whether or not it is raining here now. Deductive logic is like a chess board or a computer program, suspended in time with no connection to the real world. We can call it a well-ordered system, internally coherent and the like, but like its cousin arithmetic, it does not exist in what we call the real world. When done right, you can make some very sophisticated, even beautiful and elegant, arguments within either inductive or deductive logic, but you're still operating within an epistemic circle.
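The claim that *modus ponens* works "no matter the contents of R and W" can itself be verified by brute force. This sketch (my own illustration, not from the text) enumerates all four truth assignments to R and W and confirms that whenever both premises hold, the conclusion holds too:

```python
from itertools import product

# Material implication: p -> q is equivalent to (not p) or q.
implies = lambda p, q: (not p) or q

# Modus ponens is valid iff the conclusion W is true in every row
# where both premises (R -> W, and R) are true.
valid = all(
    w
    for r, w in product((True, False), repeat=2)
    if implies(r, w) and r
)
print(valid)  # True: no assignment makes both premises true and W false
```

Note what the check does not do: it says nothing about whether R is actually true today, which is the limit the paragraph above describes.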

Beyond character, religion, and logic, though, I would argue the entire rational enterprise is also trapped within an epistemic circle. Why be rational? I can offer a lot of rational reasons to promote rationality, but that is the type of reasoning we claim to want to avoid. R v ~R? Without a reason to prefer R over ~R, the two must be equally weighted. Only a rational person would always choose rationality…right? To do otherwise would be…irrational…. So to break the epistemic circle, I must find an irrational reason to favor rationality over irrationality. The best I've been able to come up with thus far is an aesthetic argument: I find it more beautiful to be rational than irrational. This seems a feeble defense of some 50,000+ years of civilization and the conquest of irrationality by the forces of rationality. Just because I have a subjective preference for rationality doesn't mean that all people will find it equally attractive. Indeed, some rational people make the argument that civilization has been a failure, thus humans should return to the wilds from which we came, ~R. So, using rational arguments, we seem to have demonstrated only that rationality is not as foundational as we've always thought. We are caught on a precarious perch.

Perhaps the best approach, then, is not to try to break out of the epistemic circle, but to make the circle bigger. Let's imagine a literal, mathematical circle, say x^{2} + y^{2} = c^{2}. When c=1, we have a circle with a radius of 1, centered at the coordinate (0,0). When c=2, we have a circle with a radius of 2, centered at (0,0), and so on. If a horizontal line (a line with a slope of 0) represents perfect rationality, all of these circles will only be perfectly rational at the two points where x=0, namely (0,c) and (0,-c). The farther we move from the y-axis, the steeper and less rational the circles become. But as c increases without bound, the neighborhoods of (0,c) and (0,-c) more and more closely resemble pure rationality. So trivial circular reasoning like "I am trustworthy" is easily recognized and dismissed as faulty, while more advanced circularities require more thought—and more humility as to the limits of human rationality.
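The flattening near (0,c) can be made concrete. In this sketch (the function name and the window size are my own choices), the upper half of the circle is y = sqrt(c² − x²), so its maximum vertical deviation from the horizontal tangent line y = c over a fixed window |x| ≤ 1 is c − sqrt(c² − 1), which shrinks roughly like 1/(2c) as c grows:

```python
import math

def max_deviation(c, window=1.0):
    """Vertical gap between the circle x^2 + y^2 = c^2 and its horizontal
    tangent y = c, measured at the edge of the window |x| <= window."""
    return c - math.sqrt(c * c - window * window)

for c in (1.0, 10.0, 100.0, 1000.0):
    # The gap shrinks roughly like 1/(2c) as the circle grows.
    print(f"c = {c:7.1f}  deviation = {max_deviation(c):.6f}")
```

As c grows without bound, an observer confined to that window can no longer distinguish the circle from the perfectly "rational" horizontal line.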