This chapter presents a simplified introduction to the mathematical theory of semantics of logic programs. The use of program clause trees in previous chapters was an attempt to motivate semantical issues for the Prolog programmer. This chapter will show the connection between the clause tree approach and the traditional approaches. For a more extensive treatment of the standard theoretical approaches see Lloyd (1984,1987).

The formal definition of a logic program has several components. First, there is an
*alphabet* whose symbols are used to specify the *variables*, *constants*, *functions*, and
*predicates*. The alphabet should also have the symbols '<-', ',', and '.'.

A *term* is either a variable, a constant, or a term structure of the form

f(t1,...,tn)

where t1,...,tn are terms, and f is an n-ary function. A *literal* is either a 0-ary predicate
symbol or a predicate structure of the form

p(t1,...,tn)

where t1,...,tn are terms and p is an n-ary predicate. A *clause* is either a unit clause, which
has the form

h <-

or else has the form

h <- b1,...,bk

where h,b1,...,bk are literals. h is referred to as the *head* literal, and b1,...,bk as the *body* literals.
A formal *logic program* is a sequence of clauses. A *goal* for logic program P is
defined to be a sequence of the form

<- g1,g2,...,gn

of literals, where the gi are referred to as *subgoals*.

These definitions provide a characterization of the syntax of formal logic programs.

*Exercise 9.1* Choose several Prolog programs from Section 2 and describe the
corresponding formal logic program. Of course, it is intended that Prolog variables be
taken as formal variable symbols, Prolog functions symbols as formal functions, and
Prolog predicate symbols as formal predicate symbols. Note that an evaluable predicate of
a Prolog program, such as '<', is treated as just a formal symbol when the Prolog program
is considered as a formal logic program, but there will be some intended interpretation of
that symbol when one considers the meaning of the program. (See section 9.7 in this
regard.)

The exercise is intended to stress the distinction between a Prolog program and a formal logic program, as well as to illustrate the connections between the two. The mathematical theory studies formal logic programs.

The reader should already have an intuitive understanding of unification from working
with Prolog programs. If e1 and e2 are two logic expressions (terms, literals or clauses), then a unifier for e1 and e2
is a substitution of variables s such that s(e1) = s(e2). A *most general unifier (mgu)* for
e1 and e2 is a unifier s such that if t is any other unifier then t = ns (s applied first, then n
afterward) for some substitution of variables n; that is, every other unifier t is a specialization of s.

Of interest to us here is a clear characterization of the unification of Prolog terms, calculating a most general unifier. For emphasis, let us spell out the salient cases:

1) A variable x and a term t unify provided that x does not occur in t. This unification produces an answer substitution 'x/t', meaning "for x substitute t".

2) Two constants c1 and c2 must be identical in order to unify. This unification produces the identity substitution.

3) Two terms, or literals, p(s1,...,sk) and q(t1,...,tk) unify provided that p and q are identical and the argument pairs unify cumulatively: s1 unifies with t1 using substitution u1, then u1(s2) unifies with u1(t2) using u2, and so on; this produces the composite substitution u = u1u2...uk.

The proviso in 1) that x not occur in t is the occurs-check. Most Prolog implementations omit the occurs-check for the sake of computational efficiency. This adversely affects the soundness of Prolog, as we will illustrate in section 9.7.
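The three cases can be sketched in Python. The encoding below is our own, not the text's: variables are strings beginning with '?', compound terms and literals are tuples (f, t1, ..., tn), and a substitution is a dictionary. This is an informal sketch of the definitions, not a production unifier.

```python
def is_var(t):
    """Variables are represented as strings beginning with '?'."""
    return isinstance(t, str) and t.startswith("?")

def walk(t, s):
    """Follow variable bindings recorded in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(x, t, s):
    """Occurs-check: does variable x occur in term t under substitution s?"""
    t = walk(t, s)
    if t == x:
        return True
    if isinstance(t, tuple):
        return any(occurs(x, arg, s) for arg in t[1:])
    return False

def unify(t1, t2, s=None):
    """Return an mgu extending s, or None if t1 and t2 do not unify."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:                    # case 2: identical constants (or variables)
        return s
    if is_var(t1):                  # case 1: bind a variable, with occurs-check
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if is_var(t2):
        return None if occurs(t2, t1, s) else {**s, t2: t1}
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):   # case 3: unify arguments cumulatively
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None
```

For example, unify("?x", ("f", "a")) yields the substitution '?x/f(a)', while unify("?x", ("f", "?x")) is rejected by the occurs-check.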

*Exercise 9.2* Write a Prolog unifier that will perform the occurs-check.

An interpretation is supposed to assign meaning (semantics) to a formal logic program. For example, consider the following program

insert(X,[],[X])
insert(X,[Y|Z],[X,Y|Z]) <- X < Y
insert(X,[Y|Z],[Y|R]) <- Y < X, insert(X,Z,R)

If we interpret "[...|...]" as the usual list-constructor function, '<' as the less-than relation for numbers, and "insert" as the appropriate ordered insertion relation, then the program clauses are all true when interpreted over the domain of integers (as well as over many other number domains). The symbol '<-' is always interpreted as Boolean "if", and ',' as "and". Obviously, one could interpret the program over character domains, or over many other ordered domains, in such a way that all of the clauses come out true. An interpretation that makes all of the program clauses true is called a model of the program.

The mathematical definition of an interpretation of a logic program requires that there be a domain set D, and the following assignments:

(a) each constant of P is assigned to an element of D;

(b) each function symbol of P is assigned to some particular function over domain D having the same arity (number of arguments);

(c) each predicate symbol of P is assigned to some particular relation on D having the same arity.

Additional requirements must hold. Suppose that we name the interpretation I, and that if s is a symbol of P then I(s) refers to the value assigned over D, according to (a), (b), or (c). An assignment of variables A is a function from the variables of P to the domain D. The value of a term containing variables depends upon an assignment of variables, as follows.

(d) I(A)(x) = A(x) if x is a variable symbol of P

(e) I(A)(c) = I(c) if c is a constant symbol of P

(f) I(A)(f(t1,...,tn)) = I(f)(I(A)(t1),...,I(A)(tn)) for each function f of P and term arguments t1,...,tn.
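The recursive clauses (d)-(f) translate almost line-for-line into code. The following Python sketch is ours, not part of the formal theory: variables and constants are encoded as strings, compound terms as tuples (f, t1, ..., tn), and the interpretation I is a dictionary taking each constant to its value in D (as in (a)) and each function symbol to a Python function (as in (b)). The symbols 'one' and 'plus' below are hypothetical.

```python
def eval_term(t, I, A):
    """Value I(A)(t) of term t under interpretation I and assignment A."""
    if isinstance(t, tuple):                        # clause (f): compound term
        f, *args = t
        return I[f](*(eval_term(a, I, A) for a in args))
    if t in A:                                      # clause (d): variable
        return A[t]
    return I[t]                                     # clause (e): constant

# A hypothetical interpretation over the integers:
# the constant 'one' denotes 1, the function 'plus' denotes addition.
I = {"one": 1, "plus": lambda a, b: a + b}
A = {"x": 4}                                        # an assignment of variables
value = eval_term(("plus", "x", ("plus", "one", "one")), I, A)
```

Here eval_term computes I(A)(plus(x, plus(one, one))) = 4 + (1 + 1) = 6.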

Now we define the "truth" of clauses with respect to an interpretation.

(1) for a unit clause,

I(p(t1,...,tn)) = true if and only if I(p)(I(A)(t1),...,I(A)(tn)) = true, for every assignment of variables A.

(2) for a nonunit clause,

I(h <- b1,...,bk) = true unless there is some assignment of variables A such that I(A)(h) = false and I(A)(b1) = ... = I(A)(bk) = true.

As stated previously, an interpretation I is a *model* for P provided I(c)=true for every
clause c of P. We say that I *satisfies* a goal <- g1,g2,...,gn provided that I satisfies each of
the gi.

The *Herbrand universe* of a logic program P consists of all of the terms that can be made
using the constants and function symbols of P. If P has no constants, a single dummy
constant is assumed. For example, the Herbrand universe of the logic program

p(x,y) <- q(x), r(y)
p(f(x,y)) <- p(y,x)
p(f(a,b))

is infinite but includes elements {a, b, f(a,a), f(b,b), f(a,b), f(b,a), f(f(a,a),a),...}. An
interpretation of P whose domain is (a subset of) the Herbrand universe of P is called a
Herbrand interpretation of P. If p is a predicate of P of arity n, then the ground
instances of the literal p(x1,...,xn) are of the form p(t1,...,tn) where t1,...,tn are elements of
the Herbrand universe. The *Herbrand base* B(P) of P consists of all possible ground
instances of literals of P. The Herbrand base of the sample program above is infinite and
includes {p(a,a), p(a,b), p(b,f(a,a)), ..., p(f(a,b),b), ...}.

Any subset of B(P) also specifies an interpretation of P. If S is a subset of B(P), then in the corresponding interpretation only the ground literals of S are true, and the other literals in B(P)\S are taken to be false. The minimal Herbrand model of P can be constructed as follows.

M1 = {c | c in B(P) is a ground instance of an atomic clause of P}

M2 = {c | c <- b1,...,bn is a ground instance of a clause of P and b1,...,bn belong to M1} U M1

...

Mk = {c | c <- b1,...,bn is a ground instance of a clause of P and b1,...,bn belong to M(k-1)} U M(k-1)

Then

M(P) = U Mi (i >=1)

is a minimal model for P. Another characterization of M(P) is given as an intersection:

M(P) = intersection {M | M is a subset of B(P) and M is a model of P}

so M(P) is indeed the unique smallest Herbrand model of P. It should be clear that M(P) satisfies a ground goal <- g1,g2,...,gn if and only if each of the gi belongs to M(P). For a characterization of M(P) in terms of fixed points of monotonic operators, see Lloyd (1984,1987).

*Example.* Suppose that P is the following logic program (x and y are variables; a, b, and e
are constants):

p(x,y) <- q(x), r(y).
p(a,x).
p(e,a).
q(x) <- s(x).
s(b).
r(e).

Then M(P) is calculated using a kind of "forward deduction": the basic facts obtained from unit clauses form M1, these and the program clauses are used to derive new conclusions M2, and so on.

M1 = {p(a,a), p(a,b), p(a,e), p(e,a), s(b), r(e)}
M2 = M1 U {q(b)}
M3 = M2 U {p(b,e)} = M(P)
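As an informal check of this construction, the Mi iteration can be run mechanically. The Python sketch below uses our own encoding, not part of the formal definitions: literals are tuples, and each clause is a (head, body, variables) triple that is grounded over the Herbrand universe {a, b, e}.

```python
from itertools import product

def ground_instances(clause, universe):
    """All ground instances over 'universe' of a clause (head, body, vars)."""
    head, body, variables = clause
    for values in product(universe, repeat=len(variables)):
        s = dict(zip(variables, values))
        def subst(lit):
            return (lit[0],) + tuple(s.get(a, a) for a in lit[1:])
        yield subst(head), [subst(b) for b in body]

def minimal_model(clauses, universe):
    """Iterate M1, M2, ... up to the least fixed point M(P)."""
    M = set()
    while True:
        step = {h for c in clauses
                  for h, body in ground_instances(c, universe)
                  if all(b in M for b in body)}
        if step <= M:          # nothing new was derived: fixed point reached
            return M
        M |= step

# The example program: p(x,y) <- q(x), r(y).  p(a,x).  p(e,a).
#                      q(x) <- s(x).  s(b).  r(e).
P = [(("p", "x", "y"), [("q", "x"), ("r", "y")], ["x", "y"]),
     (("p", "a", "x"), [], ["x"]),
     (("p", "e", "a"), [], []),
     (("q", "x"), [("s", "x")], ["x"]),
     (("s", "b"), [], []),
     (("r", "e"), [], [])]
M = minimal_model(P, ["a", "b", "e"])
```

The successive values of 'step' reproduce M1, M2, and M3 from above, and the final M contains exactly the eight ground literals of M(P).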

The definitions are easy to extend to the case of interpretations over domains other than the Herbrand universe. The reason for doing this should be obvious from consideration of the Prolog programs that have appeared in the previous chapters of this primer, and, in particular, the program that started this chapter. Prolog is an "untyped" language; however, it has been implicitly assumed that there is a relevant domain of individuals which provides "inputs" for computing goals involving the program. To this end, suppose that P is a logic program, that D is a specified domain of individuals comprising the allowable constants that may appear in the terms of P, and that D contains the constants that actually do explicitly appear in P. The D-universe of P consists of all of the terms that can be formed using the constants from D; we still refer to these terms as ground terms, or as ground terms over D if necessary. Then the yield of P over D, M(P,D), is defined inductively as follows

Y1 = {c | c is a ground instance over D of an atomic clause of P}

and for k>1,

Yk = {c | c <- b1,...,bn is a ground instance over D of a clause of P and b1,...,bn belong to Y(k-1)} U Y(k-1)

Then define

M(P,D) = U Yi (i >=1)

Let B(P,D) refer to the set of ground instances of literals over D that can be formed using predicates from P. Then, in a fashion similar to that for B(P), models over D can be identified with subsets of B(P,D). That is, a subset of B(P,D) specifies the set of true facts in the model over D, and any model over D determines a subset of B(P,D) consisting of ground literals over D that are true in that model. As for the minimal Herbrand model, we have

M(P,D) = intersection {M | M is a subset of B(P,D) and M is a model of P over D}.

*Exercise 9.3* In the example above the Herbrand universe of the program was U={a,b,e}.
Extend U to the domain D={a,b,e,c} and calculate M(P,D). Why is M(P,D) different from M(P)?
Characterize the goals that are satisfied over D as opposed to those that are satisfied over
U. Discuss this issue of domains for the corresponding Prolog program obtained by
capitalizing variable names, replacing '<-' by ':-' and inserting a period '.' at the end of each
clause. For the Prolog program, can the goal ?- p(a,X) ever produce the binding X = b?
Why?

Previous chapters have characterized consequences of Prolog programs using the notion
of program clause trees. It is fairly clear how to carry the definition over to formal logic programs. If P is
a logic program and D is a domain of constants containing the constants of P, then a
*P-tree* over D is defined just as before (section 2.1), where the branchings are determined by
the clauses of P. Previously, this primer has defined a ground literal g over D to be a
consequence of P over D provided that there is a program clause tree having root g all of whose
leaves are true. The following theorem justifies this terminology.

*Theorem. For a ground literal g of P over domain D, there is a finite P-tree rooted at g
having all true leaves if, and only if, g belongs to M(P,D). *

*Exercise 9.4* For the mathematically inclined, prove the theorem. Characterize the
relationship between the Yi's and the heights of certain corresponding program clause trees.

One important issue has been glossed over in the previous two sections on semantics. In the example that started Section 9.3 ('insert') there was some sort of intended interpretation for the function '[..|..]' and for the relation '<'. Thus, for example, over the domain D of integers, we want to say that the ground literal

insert(-2,[-7,-3,2,5],[-7,-3,-2,2,5])

is a consequence of the program, if '<' is interpreted as the less-than relation for integers. The following clause tree justifies this claim:

Fig. 9.2

where the two left-most leaves of the tree are true in the intended interpretation of '<' on integers. The other branchings, based on the clauses of the program, used the intended interpretation of '[..|..]' as the list-constructor function in order to unify terms.

For "practical" programs there is very often some intended interpretation whose properties are assumed (or presumed) rather than explicitly specified.

*Exercise 9.5* Using clause trees show what happens when the 'insert' program is used
on lists with repeated elements.

The Prolog derivation trees of sections 2.1 and 3.1 provided good motivation for a formal definition of deduction for logic programs. We saw there the SLD type of proof:

S -- selection rule for choosing the next subgoal to work on

L -- linear descent down the deduction tree

D -- definite program clauses (Prolog, in addition, investigates the deduction tree depth-first)

As previously seen, Prolog uses a leftmost selection rule. That is, for a goal ?- g1,...,gn, choose g1 as the next subgoal to work on. An input resolution of a goal using a side clause amounts to the following kind of computation:

<- a1,...,ak
 |
 |-- a <- b1,...,bn
 |
<- b1',...,bn',a2',...,ak'

where a <- b1,...,bn is a program clause, a1 and a have an mgu s, and b1',...,bn',a2',...,ak'
is the goal that results when s is applied to b1,...,bn,a2,...,ak. Notice that a unit side
clause would have the simple form 'a' and there would be no bi's. We say that the bottom
goal results from the top goal using *input resolution*.

Suppose that P is a logic program and that g is a goal. An SLD-derivation of P U {g} consists of a sequence g0=g, g1, g2, ..., where each g(k+1) results from gk using input resolution involving an mgu sk. An SLD-refutation of P U {g} is an SLD-derivation g0=g, g1, ..., gm=nil (the empty goal); the corresponding computed answer substitution is the composition of the mgu's.

SLD-refutations are a mathematical characterization of Prolog derivations. The various branches of a Prolog deduction search tree that end in 'true' correspond to SLD-refutations. A computed answer substitution corresponds to the answer that Prolog calculates down a particular branch of the derivation search tree.
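Stripped of unification, the skeleton of this search can be sketched for propositional programs, where every literal is a 0-ary predicate. The encoding of clauses as (head, body) pairs and the example program P are our own, hypothetical choices, not from the text.

```python
def sld_refute(program, goal, depth=25):
    """Depth-first search for an SLD-refutation of a propositional goal.
    program: list of (head, body-list) clauses; goal: list of atoms."""
    if not goal:
        return True            # reached the empty goal nil: a refutation
    if depth == 0:
        return False           # bound the search; real search trees may be infinite
    first, rest = goal[0], goal[1:]      # leftmost selection rule
    for head, body in program:           # try side clauses in program order
        if head == first and sld_refute(program, body + rest, depth - 1):
            return True        # input resolution replaced 'first' by its body
    return False

# Hypothetical program:  a <- b, c.   b <- .   c <- b.
P = [("a", ["b", "c"]), ("b", []), ("c", ["b"])]
```

For example, sld_refute(P, ["a"]) succeeds via the derivation <- a, <- b,c, <- c, <- b, nil, while sld_refute(P, ["d"]) fails because no clause head matches d.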

It is possible to use any reasonable selection rule for choosing the next subgoal, and the soundness and completeness results of the next subsection still apply.

See Exercise 3.3.3 for a meta-interpreter that simulates SLD-deduction. (Also, consult the discussion in Section 3.3.)

If goal g2 results from goal g1 using input resolution involving side clause 'a <- b' of logic
program P, using mgu s, then s(g1) is a logical consequence of s(g2) and s(b). That is,
in every interpretation of P for which s(g2) and s(b) are true, it is also the case that s(g1)
is true. (If b is nil then s(g1) is a consequence of s(g2) alone.) Thus, if g is the top-most
goal of an SLD-refutation, and s=s0s1...sk is the composition of the unifiers used in the
refutation, then s(g) is a logical consequence of P. Thus SLD-refutations yield correct
answer substitutions.

*Soundness Theorem If P is a logic program and <- g is a goal, and s is an answer
substitution computed by some SLD-refutation, then
s(g) is a logical consequence of P, or equivalently,
for every ground instance of s(g), there is a program clause tree rooted at the ground instance
having all true leaves. *

Prolog is usually implemented without the occurs check in unification. This allows for some unsoundness. For example, consider the following logic program.

unsound <- parent(x,x)
parent(x,father(x))

Convert this program to Prolog, and try the goal:

?- unsound.
yes

It is possible to construct more realistic, and more complicated, examples where it could be harmful not to have an occurs-check. If one does not have an occurs-check available, then programs must be more carefully written in order to avoid incorrect answers.

The basic completeness result for SLD-deduction can be stated as follows.

*Completeness Theorem Suppose that g is a goal, that s is a substitution of variables,
and that s(g) is a member of M(P). Then there is some SLD-refutation of g with computed
answer substitution t such that s=nt for some substitution of variables n; that is, s is a
specialization of substitution t. *

The completeness theorem means that every correct answer for a query can be computed (perhaps in a more general form) using some SLD-refutation. That one needs to allow the calculation of a more general answer is a technicality and is illustrated by the following simple example. Let P be the program

p(x).

If s=x/bill (substitute bill for x), then s(p(x))=p(bill) is a formal logical consequence of P, but notice that 'bill' is not in the Herbrand universe of P; that is, 'bill' has nothing to do with the program P. So no possible SLD-refutation could actually compute the answer 'bill'. However, the goal '<- p(x)' succeeds, calculating answer substitution t=x/x. If n=x/bill, then we have s = nt, as claimed. Notice that we would also say that p(bill) is a consequence of P over any domain D={bill,...}, as a very simple P-tree over D would establish.

The incompleteness of Prolog was discussed in section 3.3. Recall that it is the strictness of Prolog's choice mechanism that causes this. The abstract characterization of SLD-deduction allows the use of any side-clause choices that are available.

Reread the definition of M(P,D) given above. It should be intuitively clear that if c belongs to Yk, then there is in fact an SLD-refutation of P U {c} in k or fewer steps, and a program clause tree rooted at c, with all true leaves, whose total height is at most k. The "forward" steps needed to calculate that c is in Yk use clauses of P in the reverse order that they would be used in some corresponding SLD-refutation of P U {c}. (Use the definition of height for program clause trees that was given in section 2.5.)

Logic programs with negation as failure are logic programs whose clauses can have negative literals, which we will write using 'not'. For example,

p(x) <- q(x), not(r(x)).
q(a) <- not(s(x)).
q(b).
r(a).
s(c).

is a logic program with negation as failure. For such programs we can define program clause trees in a fashion similar to that for positive programs (that is, programs without this negation). The program clause trees are grown in the same way, but when a negative leaf 'not(k)' is encountered that branch is complete and is not grown further.

Suppose that P is a logic program with negation. An ancestor trail is a sequence a0, a1, ..., an of nodes in P-trees such that a(i+1) is either a positive node which is a child of ai, or else a(i+1) = k where 'not(k)' is a child of ai. Note that ancestor trails can wind through several trees: trails can leave a particular tree at a negative leaf. An extension of an ancestor trail a0, a1, ..., an must itself be an ancestor trail, of the form a0, a1, ..., an, a(n+1), ..., am. We say that P satisfies the *finite trail property* provided that every ancestor trail for P is only finitely extendible.

*Theorem. If P satisfies the finite trail property then every ancestor trail for P must
eventually terminate with 'true' or at a node 'b' such that 'b' does not unify with the head
of any clause of P. In particular, no trail could have a repeated node. *

The finite trail property is very restrictive, and the concept is not generally decidable for logic programs with negation. Nevertheless, the vast majority of Prolog programs in this primer have the finite trail property when considered as an abstract logic program. (Find some that do not.)

Suppose that P has the finite trail property. Then a positive ground literal g is a consequence of P provided that there is some program clause tree T rooted at g such that every leaf of T is either true or else is a negative leaf of the form 'not(k)' where k is not a consequence of P.

Note that the previous example satisfies the finite trail property. Now p(b) is a consequence of the program, as established by the program clause tree

Fig. 9.8.1

since r(b) cannot possibly be a consequence of the program. On the other hand, p(a) is not a consequence of the program. Efforts to construct a substantiating program clause tree for p(a) lead to the consideration of

Fig. 9.8.2

but s(c) is a consequence of the program.

*Exercise 9.8.1* How does the definition of consequence given in this section compare with
the definition given previously when considering positive programs?

*Exercise 9.8.2* Show that the definition of consequence using clause trees cannot be
infinitely circular if P satisfies the finite trail property.

*Exercise 9.8.3* Show that the logic program

a <- not(b)
b <- not(a)

does not have the finite trail property.

*Exercise 9.8.4* Show that the logic program

p(X) <- p(f(X))

does not have the finite trail property.

A modified form of SLD deduction can compute answers for logic programs with negation. The modified form is called SLDNF deduction, where the NF stands for negation as failure. The situation is similar to that for positive programs, except that now goals can be negative. When a negative goal 'not(k)' is selected, then a deduction of k ensues. If this deduction succeeds, then the goal 'not(k)' fails; otherwise the goal 'not(k)' succeeds. Note that this definition is indeterminate in case one negative goal leads to another, which in turn leads to another, etc. But for programs satisfying the finite trail property, this cannot happen. The research literature studies related issues, one of which is the concept of a stratified logic program.
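For programs with the finite trail property, the procedure just described can again be sketched in the propositional (ground) case. The encoding and the example program are our own hypothetical illustrations: a literal is either an atom or a pair ('not', atom), and a negative goal triggers a subsidiary deduction whose finite failure counts as success.

```python
def sldnf(program, goal, depth=25):
    """SLDNF deduction for propositional programs with 'not' literals.
    program: list of (head, body-list) clauses; goal: list of literals."""
    if not goal:
        return True                      # empty goal: refutation found
    if depth == 0:                       # guard: without the finite trail
        raise RuntimeError("depth bound hit")  # property this may not terminate
    lit, rest = goal[0], goal[1:]        # leftmost selection rule
    if isinstance(lit, tuple) and lit[0] == "not":
        # negation as failure: 'not k' succeeds iff the deduction of k fails
        if sldnf(program, [lit[1]], depth - 1):
            return False                 # k succeeded, so 'not k' fails
        return sldnf(program, rest, depth - 1)
    for head, body in program:           # positive literal: input resolution
        if head == lit and sldnf(program, body + rest, depth - 1):
            return True
    return False

# Hypothetical program:  p <- q, not(r).   q <- .   r <- s.   (no clause for s)
P = [("p", ["q", ("not", "r")]), ("q", []), ("r", ["s"])]
```

Here sldnf(P, ["p"]) succeeds because the subsidiary deduction of r fails finitely (s has no clauses), so not(r) succeeds. For a program such as a <- not(b), b <- not(a), which lacks the finite trail property, one negative deduction leads to another indefinitely and the depth bound is hit.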

Various soundness and completeness results are available for SLDNF deduction. See Lloyd (1984,1987) for examples and further references. The clause-tree approach presented here is distinctive but related to the traditional approaches. Logic programs with negation are very important for default reasoning and for deductive databases. There are other semantic formulations for logic programs with negation, including the so-called well-founded and stable semantics; see Van Gelder et al. (1991). Program clause trees can play a useful role for these alternate formulations of semantics as well.
