Propositional Logic

A logical language is designed to make reasoning formally explicit. As a result, it can capture aspects of natural language which determine whether a set of sentences is consistent. As part of this approach, we need to develop logical representations of a sentence φ that formally capture the truth-conditions of φ. We'll start off with a simple example:

(8) [Klaus chased Evi] and [Evi ran away].

Let's replace the two sub-sentences in (8) by φ and ψ respectively, and put & for the logical operator corresponding to the English word and: φ & ψ. This structure is the logical form of (8).

Propositional logic allows us to represent just those parts of linguistic structure that correspond to certain sentential connectives. We have just looked at and. Other such connectives are not, or, and if ..., then .... In the formalization of propositional logic, the counterparts of such connectives are sometimes called Boolean operators. The basic expressions of propositional logic are propositional symbols, often written as P, Q, R, etc. There are varying conventions for representing Boolean operators. Since we will be focusing on ways of exploring logic within NLTK, we will stick to the following ASCII versions of the operators:

negation      -
conjunction   &
disjunction   |
implication   ->
equivalence   <->

From the propositional symbols and the Boolean operators we can build an infinite set of well-formed formulas (or just formulas, for short) of propositional logic. First, every propositional letter is a formula. Then if φ is a formula, so is -φ. And if φ and ψ are formulas, then so are (φ & ψ), (φ | ψ), (φ -> ψ), and (φ <-> ψ).
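To make this inductive definition concrete, here is a minimal sketch in plain Python (independent of NLTK; the tuple encoding and the is_formula name are just illustrative choices): a formula is either a propositional symbol, a negation ('-', φ), or a binary combination (op, φ, ψ).

>>> def is_formula(f):
...     ops = ['&', '|', '->', '<->']
...     if isinstance(f, str):                    # propositional symbol, e.g. 'P'
...         return f.isalpha()
...     if isinstance(f, tuple) and len(f) == 2:  # negation: ('-', phi)
...         return f[0] == '-' and is_formula(f[1])
...     if isinstance(f, tuple) and len(f) == 3:  # binary: (op, phi, psi)
...         return f[0] in ops and is_formula(f[1]) and is_formula(f[2])
...     return False
...
>>> is_formula(('->', ('&', 'P', ('-', 'Q')), 'R'))
True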

Table 10-2 specifies the truth-conditions for formulas containing these operators. As before, we use φ and ψ as variables over sentences, and abbreviate if and only if as iff.

Table 10-2. Truth conditions for the Boolean operators in propositional logic

Boolean operator                          Truth conditions
negation (it is not the case that ...)    -φ is true in s iff φ is false in s
conjunction (and)                         (φ & ψ) is true in s iff φ is true in s and ψ is true in s
disjunction (or)                          (φ | ψ) is true in s iff φ is true in s or ψ is true in s
implication (if ..., then ...)            (φ -> ψ) is true in s iff φ is false in s or ψ is true in s
equivalence (if and only if)              (φ <-> ψ) is true in s iff φ and ψ are both true in s or both false in s

These rules are generally straightforward, though the truth conditions for implication depart in many cases from our usual intuitions about the conditional in English. A formula of the form (P -> Q) is false only when P is true and Q is false. If P is false (say, P corresponds to The moon is made of green cheese) and Q is true (say, Q corresponds to Two plus two equals four), then P -> Q will come out true.
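To see this concretely, here is a small sketch in ordinary Python (no NLTK needed) that tabulates material implication using the encoding (not P) or Q; only the row where P is true and Q is false comes out False:

>>> for P in [True, False]:
...     for Q in [True, False]:
...         print P, Q, (not P) or Q
...
True True True
True False False
False True True
False False True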

NLTK's LogicParser() parses logical expressions into various subclasses of Expression:

>>> lp = nltk.LogicParser()
>>> lp.parse('-(P & Q)')
<NegatedExpression -(P & Q)>
>>> lp.parse('P & Q')
<AndExpression (P & Q)>
>>> lp.parse('P | (R -> Q)')
<OrExpression (P | (R -> Q))>
>>> lp.parse('P <-> -- P')
<IffExpression (P <-> --P)>

From a computational perspective, logics give us an important tool for performing inference. Suppose you state that Freedonia is not to the north of Sylvania, and you give as your reasons that Sylvania is to the north of Freedonia. In this case, you have produced an argument. The sentence Sylvania is to the north of Freedonia is the assumption of the argument, while Freedonia is not to the north of Sylvania is the conclusion. The step of moving from one or more assumptions to a conclusion is called inference. Informally, it is common to write arguments in a format where the conclusion is preceded by therefore.

(9) Sylvania is to the north of Freedonia.

Therefore, Freedonia is not to the north of Sylvania.

An argument is valid if there is no possible situation in which its premises are all true and its conclusion is not true.

Now, the validity of (9) crucially depends on the meaning of the phrase to the north of, in particular, the fact that it is an asymmetric relation:

(10) if x is to the north of y then y is not to the north of x.

Unfortunately, we can't express such rules in propositional logic: the smallest elements we have to play with are atomic propositions, and we cannot "look inside" these to talk about relations between individuals x and y. The best we can do in this case is capture a particular case of the asymmetry. Let's use the propositional symbol SnF to stand for Sylvania is to the north of Freedonia and FnS for Freedonia is to the north of Sylvania. To say that Freedonia is not to the north of Sylvania, we write -FnS. That is, we treat not as equivalent to the phrase it is not the case that ..., and translate this as the one-place Boolean operator -. Replacing x and y in (10) by Sylvania and Freedonia respectively gives us an implication that can be written as:

(11) SnF -> -FnS

How about giving a version of the complete argument? We will replace the first sentence of (9) by two formulas of propositional logic: SnF, and also the implication in (11), which expresses (rather poorly) our background knowledge of the meaning of to the north of. We'll write [A1, ..., An] / C to represent the argument that conclusion C follows from assumptions [A1, ..., An]. This leads to the following as a representation of argument (9):

(12) [SnF, SnF -> -FnS] / -FnS

This is a valid argument: if SnF and SnF -> -FnS are both true in a situation s, then -FnS must also be true in s. By contrast, if FnS were true, this would conflict with our understanding that two objects cannot both be to the north of each other in any possible situation. Equivalently, the list [SnF, SnF -> -FnS, FnS] is inconsistent—these sentences cannot all be true together.
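We can check this inconsistency by brute force. The following sketch (ordinary Python; the expression (not SnF) or (not FnS) is just our rendering of SnF -> -FnS) tries all four truth-value assignments and prints any assignment that makes all three sentences true. It prints nothing, confirming that no such assignment exists:

>>> for SnF in [True, False]:
...     for FnS in [True, False]:
...         if SnF and ((not SnF) or (not FnS)) and FnS:
...             print 'all true when SnF =', SnF, 'and FnS =', FnS
...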

Arguments can be tested for "syntactic validity" by using a proof system. We will say a little bit more about this later on in Section 10.3. Logical proofs can be carried out with NLTK's inference module, for example, via an interface to the third-party theorem prover Prover9. The inputs to the inference mechanism first have to be parsed into logical expressions by LogicParser().

>>> SnF = lp.parse('SnF')
>>> NotFnS = lp.parse('-FnS')
>>> R = lp.parse('SnF -> -FnS')
>>> prover = nltk.Prover9()
>>> prover.prove(NotFnS, [SnF, R])
True
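If the external Prover9 binary is not installed on your machine, one alternative (assuming your NLTK version includes it) is the pure-Python resolution-based prover in nltk.inference; it takes the same goal and assumptions as above:

>>> from nltk.inference import ResolutionProver
>>> ResolutionProver().prove(NotFnS, [SnF, R])
True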

Here's another way of seeing why the conclusion follows. SnF -> -FnS is semantically equivalent to -SnF | -FnS, where | is the two-place operator corresponding to or. In general, φ | ψ is true in a situation s if either φ is true in s or ψ is true in s. Now, suppose both SnF and -SnF | -FnS are true in situation s. If SnF is true, then -SnF cannot also be true; a fundamental assumption of classical logic is that a sentence cannot be both true and false in a situation. Consequently, -FnS must be true.

Recall that we interpret sentences of a logical language relative to a model, which is a very simplified version of the world. A model for propositional logic needs to assign the values True or False to every possible formula. We do this inductively: first, every propositional symbol is assigned a value, and then we compute the value of complex formulas by consulting the meanings of the Boolean operators (i.e., Table 10-2) and applying them to the values of the formula's components. A Valuation is a mapping from basic symbols of the logic to their values. Here's an example:

>>> val = nltk.Valuation([('P', True), ('Q', True), ('R', False)])

We initialize a Valuation with a list of pairs, each of which consists of a semantic symbol and a semantic value. The resulting object is essentially just a dictionary that maps logical symbols (treated as strings) to appropriate values.
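For instance, since it behaves like a dictionary, you can look up the value assigned to a symbol directly:

>>> val['P']
True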

As we will see later, our models need to be somewhat more complicated in order to handle the more complex logical forms discussed in the next section; for the time being, just ignore the dom and g parameters in the following declarations.

>>> dom = set([])
>>> g = nltk.Assignment(dom)

Now let's initialize a model m that uses val:

>>> m = nltk.Model(dom, val)

Every model comes with an evaluate() method, which will determine the semantic value of logical expressions, such as formulas of propositional logic; of course, these values depend on the initial truth values we assigned to propositional symbols such as P, Q, and R.

>>> print m.evaluate('(P & Q)', g)
True
>>> print m.evaluate('-(P & Q)', g)
False
>>> print m.evaluate('(P & R)', g)
False

Your Turn: Experiment with evaluating different formulas of propositional logic. Does the model give the values that you expected?
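For example, given the valuation above (P and Q true, R false), you might try formulas such as the following; the expected results follow from Table 10-2:

>>> print m.evaluate('(P | R)', g)
True
>>> print m.evaluate('(P -> R)', g)
False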

Up until now, we have been translating our English sentences into propositional logic. Because we are confined to representing atomic sentences with letters such as P and Q, we cannot dig into their internal structure. In effect, we are saying that there is no semantic benefit in dividing atomic sentences into subjects, objects, and predicates. However, this seems wrong: if we want to formalize arguments such as (9), we have to be able to "look inside" basic sentences. As a result, we will move beyond propositional logic to something more expressive, namely first-order logic. This is what we turn to in the next section.
