## Individual Variables and Assignments

In our models, the counterpart of a context of use is a variable assignment. This is a mapping from individual variables to entities in the domain. Assignments are created using the Assignment constructor, which also takes the model's domain of discourse as a parameter. We are not required to actually enter any bindings, but if we do, they are in a (variable, value) format similar to what we saw earlier for valuations.

>>> g = nltk.Assignment(dom, [('x', 'o'), ('y', 'c')])
>>> g
{'x': 'o', 'y': 'c'}
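The session above assumes that dom and val have already been built from the valuation introduced earlier. As a self-contained sketch (the valuation string below is assumed to match that earlier example, with entities b, o, and c):

```python
import nltk

# valuation from the earlier section: three entities and some properties/relations
v = """
bertie => b
olive => o
cyril => c
boy => {b}
girl => {o}
dog => {c}
walk => {o, c}
see => {(b, o), (c, b), (o, c)}
"""
val = nltk.Valuation.fromstring(v)
dom = val.domain  # the set of entities mentioned in the valuation: {'b', 'o', 'c'}

# a partial assignment binding x to o and y to c
g = nltk.Assignment(dom, [('x', 'o'), ('y', 'c')])
print(g['y'])  # the entity bound to the variable y
```

Note that Assignment behaves like a Python dictionary, so individual bindings can be looked up with ordinary indexing.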

In addition, there is a print() format for assignments which uses a notation closer to that often found in logic textbooks:

>>> print(g)
g[c/y][o/x]

Let's now look at how we can evaluate an atomic formula of first-order logic. First, we create a model; then we call the evaluate() method to compute the truth value:

>>> m = nltk.Model(dom, val)
>>> m.evaluate('see(olive, y)', g)
True

What's happening here? We are evaluating a formula similar to our earlier example, see(olive, cyril). However, when the interpretation function encounters the variable y, rather than checking for a value in val, it asks the variable assignment g to come up with a value:

>>> g['y']
'c'

Since we already know that individuals o and c stand in the see relation, the value True is what we expected. In this case, we can say that assignment g satisfies the formula see(olive, y). By contrast, the following formula evaluates to False relative to g (check that you see why this is):

>>> m.evaluate('see(y, x)', g)
False
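Both results can be traced back to the see relation itself: evaluation of an atomic formula comes down to a membership test against the valuation. A sketch of those tests (again assuming the valuation from the earlier section):

```python
import nltk

v = """
bertie => b
olive => o
cyril => c
see => {(b, o), (c, b), (o, c)}
"""
val = nltk.Valuation.fromstring(v)
dom = val.domain
g = nltk.Assignment(dom, [('x', 'o'), ('y', 'c')])
m = nltk.Model(dom, val)

# see(olive, y): olive denotes o, g supplies c for y, and (o, c) is in the relation
print(m.evaluate('see(olive, y)', g))  # True
# see(y, x): g supplies c and o, but (c, o) is not in the relation
print(m.evaluate('see(y, x)', g))      # False

# the same fact, checked directly against the valuation
print(('o', g['y']) in val['see'])     # True
```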

In our approach (though not in standard first-order logic), variable assignments are partial. For example, g says nothing about any variables apart from x and y. The method purge() clears all bindings from an assignment:

>>> g.purge()
>>> g
{}

If we now try to evaluate a formula such as see(olive, y) relative to g, it is like trying to interpret a sentence containing the pronoun him when we don't know what him refers to. In this case, the evaluation function fails to deliver a truth value:

>>> m.evaluate('see(olive, y)', g)
'Undefined'

Since our models already contain rules for interpreting Boolean operators, arbitrarily complex formulas can be composed and evaluated:

>>> m.evaluate('see(bertie, olive) & boy(bertie) & -walk(bertie)', g)
True
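The other connectives work the same way. A short sketch with disjunction (|) and implication (->), under the same assumed valuation as before:

```python
import nltk

v = """
bertie => b
olive => o
cyril => c
boy => {b}
girl => {o}
dog => {c}
walk => {o, c}
see => {(b, o), (c, b), (o, c)}
"""
val = nltk.Valuation.fromstring(v)
dom = val.domain
m = nltk.Model(dom, val)
g = nltk.Assignment(dom)  # closed formulas don't need any bindings

# cyril is a dog, not a girl or a boy, so the disjunction fails
print(m.evaluate('girl(cyril) | boy(cyril)', g))   # False
# olive walks and cyril is a dog, so the implication holds
print(m.evaluate('walk(olive) -> dog(cyril)', g))  # True
```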

The general process of determining truth or falsity of a formula in a model is called model checking.
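Model checking operates one assignment at a time; to get a feel for it, we can check an open formula under every possible binding for its free variable (a sketch, with the valuation assumed to match the earlier example):

```python
import nltk

v = """
bertie => b
olive => o
cyril => c
boy => {b}
girl => {o}
dog => {c}
walk => {o, c}
see => {(b, o), (c, b), (o, c)}
"""
val = nltk.Valuation.fromstring(v)
dom = val.domain
m = nltk.Model(dom, val)

# check see(olive, y) under each possible binding for y;
# only c, the entity that olive sees, should come out True
for e in sorted(dom):
    g = nltk.Assignment(dom, [('y', e)])
    print(e, m.evaluate('see(olive, y)', g))
```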