1. Note the explicit temporal aspect of the situation described, where one ‘premise’ was known prior to the second’s being learned. Temporal aspects of reasoning will not be considered in Chapters I–III, but will be taken up in Chapter IV, especially in IV.9 and IV.10.
2. Entropic uncertainty (see, e.g., Khinchin [35]) will be only occasionally relevant to the concerns of this book (for instance, as it can be used to describe the effects of information-acquisition and its ‘inverse’, probability mixing, as in Section 2), and this type of uncertainty is obviously very different from probability of falsity. In fact, maximum probability of falsity entails minimum entropic uncertainty, as the two-outcome case sketched below makes explicit.
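In the simplest two-outcome case, a statement that is false with probability p has entropic uncertainty given by the familiar binary entropy:

    H(p) = −p log p − (1 − p) log (1 − p),    0 ≤ p ≤ 1,

with the usual convention that 0 log 0 = 0. H(p) is maximal at p = 1/2 and vanishes at p = 1: a statement that is certainly false is maximally improbable yet minimally uncertain in the entropic sense.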
3. Brian Ellis [17] and [18] has also independently developed a probabilistic theory of the logic of conditionals which in many respects parallels the present theory. A fundamental difference between Ellis’ approach and the present one is that he treats probability as a ‘concept of truth’. Limitations of space preclude a detailed comparison here.
4. Probably the most radical implication of the present approach is that we are no longer able to give a uniform ‘semantics’ for arbitrary iterations of compounding by conditionalization, or of forming other sentential compounds with conditional constituents (see Section 8, where these matters are discussed in some detail). Lewis [40] has taken precisely this implication as showing that the present approach departs too radically from orthodox theory; a compressed sketch of his argument is given below.
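The argument, in compressed form (see [40] for the exact statement): suppose ⇒ were an ordinary two-valued connective for which p(A ⇒ B) = p(B|A) held for every probability function p, hence also for the conditionalized functions p(·|B) and p(·|¬B). Then, provided p(A∧B) and p(A∧¬B) are both positive, the theorem on total probability gives

    p(B|A) = p(A ⇒ B) = p(A ⇒ B|B)·p(B) + p(A ⇒ B|¬B)·p(¬B)
           = p(B|A∧B)·p(B) + p(B|A∧¬B)·p(¬B)
           = 1·p(B) + 0·p(¬B) = p(B),

so that every antecedent would be probabilistically irrelevant to every consequent, which is absurd. No uniform truth-conditional assignment can therefore sustain the identification of p(A ⇒ B) with p(B|A).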
5. We will attempt, so far as possible in this work, to sidestep problems having to do with defining p(A ⇒ B) when p(A) equals 0. In earlier papers [1] and [2] I made a conventional stipulation that p(A ⇒ B) = 1 when p(A) = 0, but here we have preferred to leave the ‘zero antecedent probability case’ an open problem, and have tried to indicate to what extent we may expect further developments in the probabilistic logic of conditionals to depend on that special case.
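The difficulty is visible in the ratio definition that underlies this usage:

    p(A ⇒ B) = p(A∧B)/p(A)    when p(A) > 0.

The quantity is simply undefined when p(A) = 0, so the stipulation that p(A ⇒ B) = 1 in that case is a convention rather than a consequence of the definition.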