Where I address the system of belief change I will use in this work.
Perhaps one of the most difficult of human behaviors is belief change. Our minds seem to naturally prefer to retain prior beliefs regardless of internal contradiction or external disconfirmation, and many of us have personally experienced how our perceptions and attitudes can warp to preserve prior beliefs in the face of new evidence or to prevent previous errors from being revealed. This is why we must develop a systematic approach to belief change, so that we may consistently perform “reality-checks” as successfully as possible.
In doxastic logic (the logic of beliefs), we talk about belief revision in terms of a “database”, a logically closed set of sentences held to be simultaneously true. In epistemology, this would be called a “system of beliefs” or “belief set”, and in logic this would be called a “theory”. In our case, of course, the sentences in our system of beliefs do not necessarily need to be consistent, but any inconsistency must be firmly bounded within certain limits (in our case, by the special justifications given for each paraconsistent premise).
We will also assume that one is doxastically committed to believing the logically contingent sentences that follow from the sentences in their database; thus all sentences which follow logically from the sentences in the database are actually included within the database. This property is called “deductive closure”, which means the database is “closed” under the operation of logical consequence. In practice, of course, it is unlikely that someone actually believes every logical consequence of the set of beliefs they actually hold. Such a state of belief would require a standard of logical omniscience to which no actual doxastic agent could be held. None of us are perfectly logical reasoners, and none of us can enunciate a complete list of all of our beliefs, much less all of the consequences of our beliefs, still less force ourselves to actually believe all those consequences. However, this idealization of our doxastic logic proves useful when we want to talk about paradigms and theories, so we will retain the assumption for utilitarian purposes.
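The idea of closure under logical consequence can be sketched computationally, at least for a single inference rule. The following toy example (an illustration of the fixed-point idea, not a full consequence operation) closes a finite belief base under modus ponens; the string-and-pair encoding of sentences is my own invention for the sketch:

```python
# Toy closure under modus ponens: atoms are strings like "p",
# and a pair ("p", "q") stands for the conditional "if p then q".
# Real logical closure is infinite; this fixed-point loop only adds
# the consequents of conditionals already present in the base.

def close(db):
    """Return the base closed under modus ponens."""
    db = set(db)
    changed = True
    while changed:
        changed = False
        for s in list(db):
            # If "if a then b" and "a" are both believed, add "b".
            if isinstance(s, tuple) and s[0] in db and s[1] not in db:
                db.add(s[1])
                changed = True
    return db

closed = close({"p", ("p", "q"), ("q", "r")})
print("q" in closed and "r" in closed)  # True
```

Even this one rule can swell a small base, which is why full closure is an idealization rather than a description of any actual reasoner.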
Now let us examine some simple cases of belief change. The most common model of belief change in current “Western” anglophone philosophy is called the “AGM model”, named after the three inventors of the model, Carlos Alchourrón, Peter Gärdenfors, and David Makinson (1985). This model has three possible operations: contraction, expansion, and revision.
In contraction, a sentence “p” is removed from the database. Because the database is logically closed, enough other sentences must also be removed so that “p” no longer follows from what remains. In effect, the belief set “D” (for database) is replaced by a new system of beliefs “D−p” (or in AGM notation, “D÷p”).
In expansion, a sentence “p” is added to the database and that is the only change made. In effect, the belief set “D” is replaced by the smallest belief set “D+p” that has the property of closure (see above).
In revision, a sentence “p” is added to the database after all sentences that would contradict “p” have been removed. In effect, the belief set “D” is replaced by the belief set “(D−not-p)+p” (this composition of contraction and expansion is known as the Levi identity), or in the notation of AGM, “D*p”.
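As a rough sketch of these three operations, consider the following toy model, where sentences are plain strings, “not p” is written “~p”, and logical closure is ignored entirely. This illustrates only the bookkeeping of the operations, not the full AGM postulates:

```python
# A toy belief base as a Python set of sentence strings.
# "~p" stands for "not p"; closure under consequence is not modeled.

def negate(p: str) -> str:
    """Return the syntactic negation of a sentence."""
    return p[1:] if p.startswith("~") else "~" + p

def expand(db: set, p: str) -> set:
    """D + p: add p and change nothing else."""
    return db | {p}

def contract(db: set, p: str) -> set:
    """D - p: remove p (a real contraction must also remove
    anything that would still entail p)."""
    return db - {p}

def revise(db: set, p: str) -> set:
    """D * p: contract by not-p, then expand by p."""
    return expand(contract(db, negate(p)), p)

db = {"p", "q"}
print(sorted(revise(db, "~q")))  # ['p', '~q']
```

Note how revision on this sketch is literally contraction followed by expansion, which is the structure the formal model shares.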
Why is this model useful? Because it is not enough to simply list beliefs when we want to talk about belief change. We don’t just want to talk about disconnected sentences we believe to be true, false, or—in very rare cases—both. We want to talk about how our beliefs “hang together” to form a total theory of the world. And what motivates this desire to see and test how our beliefs hang together? We engage in belief testing and belief change because in almost all cases we want to be consistent in our beliefs and to be able to detect inconsistencies in the statements and beliefs of others. This is because we tend to see contradictions in our own beliefs as problematic, and we usually see contradictions in the statements of others as an indication of either faulty reasoning or deception, both of which we tend to be socially motivated to detect for utilitarian purposes. While the paraconsistent logic we are using in this work is tolerant of inconsistency, it must always be remembered that all usage of paraconsistency must be both specially justified and limited to paradoxes and certain mystical insights to avoid logical “explosion”, the state of all possible propositions becoming arbitrarily true. We almost always want to be consistent and we almost always want others to be consistent, so we are strongly motivated to make sure that our beliefs as a whole—or at least within a given domain—“hang together” consistently.
Now let’s look at some of the problems that show up in belief change and some strategies to address them. In the following database, all four statements are held to be true and classically consistent.
1. p
2. q
3. If (p and q) then r
4. Not r
Now, obviously we have a problem here, because if sentence 4 is true, then one or more of the sentences 1-3 must be false. Given this situation, what should we do?
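The inconsistency can be verified mechanically. The following sketch enumerates all eight truth assignments to p, q, and r and checks whether any assignment satisfies all four sentences at once; the lambda encoding of the sentences is my own, using the equivalence of “if a then b” with “not a or b”:

```python
# Brute-force consistency check for the four-sentence database above.
from itertools import product

def consistent(sentences) -> bool:
    """True if some truth assignment satisfies every sentence."""
    for p, q, r in product([True, False], repeat=3):
        if all(s(p, q, r) for s in sentences):
            return True
    return False

database = [
    lambda p, q, r: p,                     # 1. p
    lambda p, q, r: q,                     # 2. q
    lambda p, q, r: (not (p and q)) or r,  # 3. if (p and q) then r
    lambda p, q, r: not r,                 # 4. not r
]

print(consistent(database))                       # False: inconsistent
print(consistent(database[:3]))                   # True: drop sentence 4
print(consistent(database[:2] + database[3:]))    # True: drop sentence 3
```

Dropping either sentence 3 or sentence 4 restores consistency, which sets up the choice discussed next.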
Our first strategy should obviously be to test each of the sentences in our database against our evidence, by which I mean our data about the actual facts or states of affairs, either mental or physical, in the world. If any of the sentences in our database contradict any of these prior epistemological commitments, we have a good case for revising the database. However, this isn’t a foolproof strategy, because sometimes there are problems in our paradigm or theory, and sometimes there are errors—or at least apparent contradictions—in our data set. So while this is the natural place to start, it may not resolve our problem.
Let’s say that everything fits the evidence…what then? We can proceed stepwise through our database and find the problematic sentences and then “contract” our database by removing them and revising any dependent sentences appropriately. In this case, the problematic sentence is either sentence 3 or sentence 4 or both. But we naturally want to revise our database of beliefs as little as possible; we don’t want to sacrifice important knowledge. If we subtract sentence 3 or sentence 4 from the database, our remaining sentences become consistent…they “hang together”. But which one should we subtract? Or should we subtract both? Generally we want to delete the fewest sentences possible because we usually want to preserve knowledge, so if we aren’t forced to subtract both sentences, we shouldn’t. Is it possible to make a principled decision between the two? Peter Gärdenfors (1988) argued convincingly that contraction of belief sets should be governed by a principle of “epistemic entrenchment”, where we say that given an exclusive choice between retaining two beliefs “p” and “q”, we should choose the belief that is more useful to inquiry and has more explanatory power. For our purposes we will choose to retain the sentence in our belief sets that most tends to the production, possession, and use of knowledge (defined as one of our primary goals in the introduction to this work), which we have already defined as “justified true belief” (defined in the section of this work on epistemology). This means we should choose to subtract sentence 3 rather than sentence 4, because sentence 4 gives us a new proposition, a new datum about a fact or state of affairs in the world, while sentence 3 defines a relation between sentences 1 and 2. We may well be able to rediscover any lost relations by reexamining the relevant relata, but if we lose a relatum by deleting sentence 4 we may lose important knowledge forever.
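The entrenchment-guided choice can be sketched as follows. The numeric scores are illustrative assumptions, not part of the AGM formalism; they simply encode the rule argued above, that data about the world (sentence 4) are harder to recover than relations between existing beliefs (sentence 3):

```python
# Illustrative entrenchment scores: higher = more entrenched, keep longer.
entrenchment = {
    "p": 3,
    "q": 3,
    "if (p and q) then r": 1,  # a relation between beliefs: rediscoverable
    "not r": 2,                # a datum about the world: hard to recover
}

# Removing either sentence 3 or sentence 4 would restore consistency.
candidates = ["if (p and q) then r", "not r"]

# Contract by the least entrenched candidate.
to_remove = min(candidates, key=entrenchment.get)
revised_db = set(entrenchment) - {to_remove}
print(to_remove)  # if (p and q) then r
```

The principle does the work here, not the particular numbers: any scoring that ranks data above derivable relations yields the same contraction.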
This simple example presents the basic epistemic situation available to us continuously. We constantly have to decide how to add, subtract, and revise beliefs in order to both accurately reflect and usefully interact with the world. Over the course of this work, I will test various ideas and revise them as necessary using methods like this and attempt to present them as clearly as possible.
Now, how does all of this apply to the alien? They know that some of their beliefs are incomplete or mistaken, but they don’t know which ones. Their task, as defined in the section of this work on the “Xeno Position”, is to learn as much as possible about the world in which they find themselves (both mental and physical) and about the purpose with which they have come to be in this world, with the goal of then working to fulfill that purpose. This task is difficult under the best of circumstances, but the problem of justifying not only a change of beliefs in general but any specific change in particular seems impossible. In short, the alien needs help. Can anything help differentiate true from false (or at least useful from useless or helpful from harmful) beliefs when the logic and evidence seem to support both but the particular issue is not subject to paraconsistency? How can the alien revise their beliefs to most accurately and usefully engage with reality, especially the realities of magick, mysticism, and living the spiritual life?
We will explore the answer to this question in the next section, where I will introduce the concept of resonance.
Works Cited:
Carlos E. Alchourrón, Peter Gärdenfors, and David Makinson, “On the Logic of Theory Change: Partial Meet Contraction and Revision Functions,” Journal of Symbolic Logic 50 (1985): 510–530.
Peter Gärdenfors, Knowledge in Flux: Modeling the Dynamics of Epistemic States (MIT Press, 1988).
