The theoretical foundations of LPTP (a logic program theorem prover)

Robert F. Stärk∗
Institute of Informatics, University of Fribourg
Rue Faucigny 2, CH–1700 Fribourg, Switzerland
Email: [email protected]

Abstract

This article contains the theoretical foundations of LPTP, a logic program theorem prover that has been implemented in Prolog by the author. LPTP is an interactive theorem prover in which one can prove correctness properties of pure Prolog programs that contain negation and built-in predicates like is/2 and call/n + 1. The largest example program that has been verified using LPTP is 635 lines long including its specification. The full formal correctness proof is 13128 lines long (133 pages). The formal theory underlying LPTP is the inductive extension of pure Prolog programs. This is a first-order theory that contains induction principles corresponding to the definition of the predicates in the program plus appropriate axioms for built-in predicates. The inductive extension allows one to express modes and types of predicates. These can then be used to prove termination and correctness properties of programs. The main result of this article is that the inductive extension is an adequate axiomatization of the operational semantics of pure Prolog with built-in predicates.

Keywords: Verification of logic programs; pure Prolog; left-termination; induction.

1 Introduction

It has often been claimed that programs written in a declarative programming language are easier to verify than imperative programs. There are, however, only a few examples of non-trivial declarative programs that have been verified formally. To support the claim we have implemented an interactive theorem prover LPTP in which one can verify pure Prolog programs of several hundred lines of code.

∗ Research supported by the Swiss National Science Foundation. Appeared in: Journal of Logic Programming, 36(3):241–269, 1998.

The reason that it is possible to verify programs of this size in LPTP is that LPTP works with the declarative meaning of pure Prolog programs only. The declarative meaning of a pure Prolog program is given by its inductive extension. This is, roughly speaking, Clark’s completion plus induction along the definition of the predicates. The Prolog code of a predicate is translated into three positive elementary inductive definitions expressing success, finite failure and universal termination of the predicate. In this way we avoid the overhead that is usually created by a direct formalization of the Prolog query evaluation procedure. Consider, for example, the well-known predicate append/3:

append([], l, l).
append([x|l1], l2, [x|l3]) :- append(l1, l2, l3).

How can we prove properties of append/3? There are two possible approaches: the declarative approach and the operational approach. In the declarative approach, the two clauses of append/3 are considered as an inductive definition that defines a ternary relation between finite trees. Properties of this relation are then proved by induction on the definition of the relation. In the base case one has to show that ϕ([], l, l) is true. In the induction step one has to prove ϕ([x|l1], l2, [x|l3]) from the induction hypothesis ϕ(l1, l2, l3). The conclusion is then that ϕ(l1, l2, l3) is true whenever append(l1, l2, l3) holds. The variables x, l1, l2, l3 range over finite trees, i.e. closed terms of the Herbrand universe.

In the operational approach, the two clauses of append/3 are considered as a program. Properties of append/3 are proved with respect to a query evaluation procedure, say a Prolog compiler. Typical operational properties are:

(1) If the query append(l1, l2, x) is called, where l1 is the list [a1, . . . , am] and l2 is the list [b1, . . . , bn] and x is a variable, then the query evaluation stops and returns the answer x = l3, where l3 is the list [a1, . . . , am, b1, . . . , bn].

(2) If the query append(x, y, l3) is called, where x and y are variables and l3 is the list [c1, . . . , ck], then the whole query evaluation tree is finite and the answers returned are of the form x = l1 and y = l2, where l1 and l2 is a decomposition of the list l3.

Such properties are proved by induction on the length of the lists or by other methods. In this paper we show how one can prove operational properties of logic programs using the declarative approach. The trick is that the clauses of append/3 are translated into three positive elementary inductive definitions of three new relations append^s, append^f and append^t which express success, finite failure and universal termination of append/3.

What are the operational properties of append/3 that we can prove with declarative methods? We divide the properties into mode, type, termination, function and other properties. The properties are expressed by formulas of a first-order language with syntactic operators S, F and T for success, failure and universal termination of queries. The operators are not modal operators. They are just abbreviations for other formulas (see Sect. 6). The properties of append/3 are:

I. Mode properties of append/3:
1. ∀x, y, z (S append(x, y, z) ∧ gr(x) ∧ gr(y) → gr(z)),
2. ∀x, y, z (S append(x, y, z) ∧ gr(z) → gr(x) ∧ gr(y)).

II. Type properties of append/3:
3. ∀x, y, z (S append(x, y, z) → S list(x)),
4. ∀x, y, z (S append(x, y, z) ∧ S list(y) → S list(z)),
5. ∀x, y, z (S append(x, y, z) ∧ S list(z) → S list(x) ∧ S list(y)).

III. Termination properties of append/3:
6. ∀x, y, z (S list(x) → T append(x, y, z)),
7. ∀x, y, z (S list(z) → T append(x, y, z)).

IV. Function properties of append/3:
8. ∀x, y (S list(x) → ∃!z S append(x, y, z)),
9. ∀x, y1, y2, z (S append(x, y1, z) ∧ S append(x, y2, z) → y1 = y2),
10. ∀x1, x2, y, z (S list(z) ∧ S append(x1, y, z) ∧ S append(x2, y, z) → x1 = x2).

V. Other properties of append/3:
11. ∀l1, l2, l3, x, y, z (S append(l1, l2, x) ∧ S append(x, l3, z) ∧ S append(l2, l3, y) → S append(l1, y, z)),
12. ∀x, y, z (S append(x, y, z) ∧ S list(y) → lh(z) = lh(x) + lh(y)).

The predicate ‘gr’ expresses that its argument is ground; lh(x) denotes the length of x provided that x is a list; the data type list is defined as follows:

list([]).
list([x|l]) :- list(l).

In the declarative approach, we take the clauses of append/3 and list/1 plus the obvious clauses for nat/1 (natural numbers), add/3 (addition of natural numbers) and length/2 (length of a list) and prove properties 1–12 in the inductive extension of these clauses. (The interested reader should try, after reading Sect. 7, to prove the function property 10.) One of the main results of this article says then that, since, for example, the formula ∀x, y, z (S list(z) → T append(x, y, z)) is provable in the inductive extension, we can
conclude that the goal append(t1, t2, t3) terminates under depth-first evaluation for all terms t1, t2, t3 such that list(t3) succeeds. Moreover, if append(t1, t2, t3) succeeds and t3 is a ground term, then it follows that t1 and t2 are also ground terms, etc. Thus we have a method to prove operational properties of a logic program in a declarative way. Thereby we obtain a declarative semantics for the mode, type and determinism declarations of the new logic programming language Mercury [19].

There are well-established methods for proving properties of logic programs. There are methods for proving termination, there are methods for proving well-typedness, etc. (cf. e.g. [2, 4, 9, 16, 18]). Our approach, however, is different in two respects. First, we have one single formal system in which we prove all the different properties of logic programs. Second, we prove the properties not on the operational level but on the declarative level. The main difference to [1, 10, 14] is that we use classical logic.

There are several differences between this article and [13, 21]. In this article we use general goals and not only sequences of literals. This allows a uniform treatment of built-in predicates including the predicate call/n + 1. The notions of modes, mode-assignments and µ-correct programs of [21] are no longer needed. Instead we use the unary predicate ‘gr’, which expresses that a term is ground. This has the advantage that we can now prove the type-correctness of programs inside the theory. We can even handle so-called second-order programs that use the built-in predicate call/n + 1.

The claim that our methods can be applied to programs of practical interest can only be supported by examples. Based on the theoretical results of this article we have implemented an interactive theorem prover LPTP (Logic Program Theorem Prover).
LPTP is still a prototype and it would be daring to say that LPTP is for Prolog what the Boyer–Moore theorem prover is for Lisp (cf. [5]). Although this article is on the theoretical foundations of LPTP, we list some details about the implementation: LPTP consists of 6500 lines of Prolog code. It is a lightweight system. LPTP has been designed for correctness proofs of pure Prolog programs. Programs may contain negation, if-then-else and built-in predicates like is/2.

The formulas of L̂ are

ϕ, ψ ::= ⊤ | ⊥ | s = t | R(~t) | ¬ϕ | ϕ ∧ ψ | ϕ ∨ ψ | ϕ → ψ | ∀x ϕ | ∃x ϕ,

where R denotes any predicate symbol of L̂. We write s ≠ t for ¬(s = t) and ϕ ↔ ψ for (ϕ → ψ) ∧ (ψ → ϕ). The positive formulas of L̂ are

ϕ, χ, ψ ::= ⊤ | ⊥ | s = t | s ≠ t | R(~t) | ϕ ∧ ψ | ϕ ∨ ψ | ∀x ϕ | ∃x ϕ.

Equations can occur negatively in positive formulas, but all the predicates R^s, R^f and R^t as well as ‘gr’ are only allowed to occur positively. The meaning of formulas is given by the first-order predicate calculus of classical logic. The meaning of goals will be explained below in terms of an operational semantics and later by a transformation of goals into formulas. By an L̂-theory we mean a (possibly infinite) collection T of L̂-formulas. We write T ` ϕ to express that the L̂-formula ϕ can be derived from the L̂-theory T by the usual rules of predicate logic with equality.

Free and bound variables in formulas as well as in goals are defined as usual. We write G[~x] and ϕ[~x] to express that all free variables of G or ϕ are among the list ~x; G(~x) and ϕ(~x) may contain other free variables than ~x. If A is a user-defined atomic goal and G is a goal, then the expression A :- G is called a clause with head A and body G. Sometimes, clauses have to be written in a special normal form. Let C be the following clause:

R(t1[~y], . . . , tn[~y]) :- G[~y].

Then the definition form of C is defined as

D_C[x1, . . . , xn] :≡ some ~y (x1 = t1[~y] & . . . & xn = tn[~y] & G[~y])

and the normal form of C is the clause R(x1, . . . , xn) :- D_C[x1, . . . , xn]. A logic program is a finite list of clauses. Let P be a program and R be a user-defined predicate symbol such that the clauses for R in P are C1, . . . , Cm (in this order). Then the definition form of R with respect to P is defined as

D_R^P[~x] :≡ D_{C1}[~x] or . . . or D_{Cm}[~x]

and the normalized definition of R in P is the clause R(~x) :- D_R^P[~x]. Both the definition form of a clause and the definition form of a user-defined predicate are goals. Thus, from a theoretical point of view, one could as well define a logic program to be a function that assigns to every user-defined predicate symbol R a goal D_R^P[~x] for some distinguished variables ~x.
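The translation from clauses to definition forms is purely syntactic. The following Python sketch (our own illustration, not part of LPTP; the tuple encoding of goals is an assumption of this sketch) computes D_C and D_R^P for the append/3 clauses:

```python
# Illustrative sketch: definition forms of clauses.
# Goals are tuples: ('true',), ('=', x, t), ('atom', R, args),
# ('&', G, H), ('or', G, H), ('some', y, G).
# A clause is ((head_args, body_goal), existential_vars).

def definition_form(clause, xs):
    """R(t1,...,tn) :- G  =>  some ~y (x1 = t1 & ... & xn = tn & G)."""
    (head_args, body), ys = clause
    goal = body
    for x, t in reversed(list(zip(xs, head_args))):
        goal = ('&', ('=', x, t), goal)          # prepend x_i = t_i
    for y in reversed(ys):
        goal = ('some', y, goal)                 # bind the clause variables
    return goal

def predicate_definition(clauses, xs):
    """D_R^P[~x] := D_C1[~x] or ... or D_Cm[~x] (clauses in program order)."""
    forms = [definition_form(c, xs) for c in clauses]
    d = forms[-1]
    for f in reversed(forms[:-1]):
        d = ('or', f, d)
    return d

# append([], l, l).   append([x|l1], l2, [x|l3]) :- append(l1, l2, l3).
c1 = ((('[]', 'l', 'l'), ('true',)), ['l'])
c2 = ((('[x|l1]', 'l2', '[x|l3]'), ('atom', 'append', ('l1', 'l2', 'l3'))),
      ['x', 'l1', 'l2', 'l3'])
d_append = predicate_definition([c1, c2], ('x1', 'x2', 'x3'))
```

The trailing ‘& true’ produced for the fact append([], l, l) is harmless, since O(G & true) is equivalent to O G for O ∈ {S, F, T}.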

3 Logical built-in predicates

Using the concept of goals, all the so-called logical built-in predicates can be treated in a uniform way. Without general goals, a theory of built-in predicates would be rather ad hoc, since then every built-in predicate has to be treated in a different way. We assume that D is a set of built-in atomic goals and B is a function from D into the set of goals such that the following two conditions are satisfied:

(D) If A ∈ D then Aσ ∈ D for each substitution σ.
(B) B(Aσ) ≡ B(A)σ for each A ∈ D and each substitution σ.

The idea is that D contains exactly those built-in atomic goals that can be evaluated and do not report an error message because of type violations or insufficient instantiation of arguments. The goal B(A) is then the result of the evaluation of A. In most cases the goal B(A) is either ‘true’ or ‘fail’. In other cases B(A) can be an equation or a conjunction of equations. In some cases, like in the case of the predicate call/n + 1, B(A) may even be an atomic goal. We assume that the set D is given as a union D = ⋃{D(R) : R built-in}. Here are some examples.

D(integer/1) := {integer(t) : t is ground}
B(integer(t)) := true, if t is an integer constant; fail, otherwise.

D(is/2) := {t1 is t2 : t2 is a ground arithmetic expression}
B(t1 is t2) := (t1 = n), where n is the value of t2 (as an integer constant).
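These two examples can be spelled out concretely. The following Python sketch (our own model; LPTP itself is written in Prolog, and the term encoding below is an assumption of this sketch) implements D and B for integer/1 and is/2:

```python
# Illustrative sketch: the set D of evaluable built-in atomic goals and the
# evaluation function B, for integer/1 and is/2.
# Terms: Python ints are integer constants, strings starting with an
# upper-case letter are variables, tuples ('+', t1, t2) etc. are compound.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def ground(t):
    """t contains no variables."""
    if is_var(t):
        return False
    if isinstance(t, tuple):
        return all(ground(s) for s in t[1:])
    return True

def ground_arith(t):
    """t is a ground arithmetic expression."""
    if isinstance(t, int):
        return True
    if isinstance(t, tuple) and t[0] in ('+', '-', '*'):
        return all(ground_arith(s) for s in t[1:])
    return False

def value(t):
    """Value of a ground arithmetic expression."""
    if isinstance(t, int):
        return t
    op, a, b = t
    x, y = value(a), value(b)
    return x + y if op == '+' else x - y if op == '-' else x * y

def in_D(goal):
    """A in D: the built-in call can be evaluated without an error."""
    pred, args = goal
    if pred == 'integer':
        return ground(args[0])
    if pred == 'is':
        return ground_arith(args[1])
    return False

def B(goal):
    """B(A) for A in D: 'true', 'fail' or an equation."""
    pred, args = goal
    if pred == 'integer':
        return ('true',) if isinstance(args[0], int) else ('fail',)
    if pred == 'is':
        return ('=', args[0], value(args[1]))
```

One can check on this fragment that the two closure conditions (D) and (B) hold: membership in D and the result of B only depend on parts of the goal that substitutions leave evaluable.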

S R(~t) :≡ R^s(~t)           F R(~t) :≡ R^f(~t)           T R(~t) :≡ R^t(~t)
S(not G) :≡ F G              F(not G) :≡ S G              T(not G) :≡ T G ∧ gr(G)
S true :≡ ⊤                  F true :≡ ⊥                  T true :≡ ⊤
S fail :≡ ⊥                  F fail :≡ ⊤                  T fail :≡ ⊤
S(G & H) :≡ S G ∧ S H        F(G & H) :≡ F G ∨ F H        T(G & H) :≡ T G ∧ (F G ∨ T H)
S(G or H) :≡ S G ∨ S H       F(G or H) :≡ F G ∧ F H       T(G or H) :≡ T G ∧ T H
S(s = t) :≡ (s = t)          F(s = t) :≡ ¬(s = t)         T(s = t) :≡ ⊤
S(some x G) :≡ ∃x S G        F(some x G) :≡ ∀x F G        T(some x G) :≡ ∀x T G

Table 3: Operators for success, failure and termination.

Since the proof of this theorem is rather technical, we postpone it to an appendix. In the following we will use the calculus for signed queries only, and we will not refer to the stack-based, operational model. Note that the set of safe queries is not recursively enumerable. Therefore there is no finitary calculus in which one can derive exactly the safe queries.

6 Syntactic operators for success, failure and termination

For the declarative semantics of logic programs we need three syntactic operators S, F and T which transform goals of the language L into positive L̂-formulas. S G is read: G succeeds; F G is read: G fails; T G is read: G terminates (and is safe). The operators S, F and T are not part of the language. They are defined notions. The operators are defined in Table 3. The cases T(G & H), T(G or H) and T(not G) require special attention. The other cases are as one would expect. An immediate consequence of the definition of T(G & H)


is that T(E & (F & G)) is equivalent to T((E & F) & G). This can be seen as follows:

T(E & (F & G)) ↔ T E ∧ (F E ∨ T(F & G))
               ↔ T E ∧ (F E ∨ (T F ∧ (F F ∨ T G)))
               ↔ T E ∧ (F E ∨ T F) ∧ (F E ∨ F F ∨ T G)
               ↔ T(E & F) ∧ (F(E & F) ∨ T G)
               ↔ T((E & F) & G)

The definition of T(G or H) shows that termination has to be understood as universal termination. The goal G or H terminates only if both branches, G and H, terminate. Note that F(s = t & G) is equivalent to s = t → F G and T(s = t & G) is equivalent to s = t → T G.

The definition of T(not G) is the essential difference between the T operator here and the T (resp. L) operator in [20] and [21]. There, T(not G) is simply defined as T G. Here, we require in addition that G is ground, using the operator ‘gr’ which is extended from terms to goals as follows:

gr(true) :≡ ⊤,                         gr(fail) :≡ ⊤,
gr(s = t) :≡ gr(s) ∧ gr(t),            gr(R(t1, . . . , tn)) :≡ gr(t1) ∧ . . . ∧ gr(tn),
gr(G & H) :≡ gr(G) ∧ gr(H),            gr(G or H) :≡ gr(G) ∧ gr(H),
gr(some x G) :≡ ∃x gr(G),              gr(not G) :≡ gr(G).

What we want is that for a goal G with free variables x1, . . . , xn the following is true:

(∗)  gr(G) ↔ gr(x1) ∧ . . . ∧ gr(xn).

It is not possible to take this as a definition of gr(G) directly, since then we would lose the substitution property that (T G)σ ≡ T(Gσ) for each substitution σ. In the inductive extension of a logic program, however, (∗) will be provable.
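Since S, F, T and the goal-level ‘gr’ are mere abbreviations, they can be computed by a straightforward recursion on goals. The following Python sketch (our own illustration under a tuple encoding of goals and formulas; not LPTP code) implements Table 3 and the ‘gr’ clauses above:

```python
# Illustrative sketch: the syntactic operators S, F, T of Table 3 and the
# goal-level 'gr' operator, as translations on goal ASTs.
# Goals: ('true',), ('fail',), ('=', s, t), ('atom', R, args),
#        ('not', G), ('&', G, H), ('or', G, H), ('some', x, G).

def S(g):
    tag = g[0]
    if tag == 'atom': return ('pred', g[1] + '^s', g[2])
    if tag == 'not':  return F(g[1])
    if tag == 'true': return ('top',)
    if tag == 'fail': return ('bot',)
    if tag == '&':    return ('and', S(g[1]), S(g[2]))
    if tag == 'or':   return ('or', S(g[1]), S(g[2]))
    if tag == '=':    return ('eq', g[1], g[2])
    if tag == 'some': return ('exists', g[1], S(g[2]))

def F(g):
    tag = g[0]
    if tag == 'atom': return ('pred', g[1] + '^f', g[2])
    if tag == 'not':  return S(g[1])
    if tag == 'true': return ('bot',)
    if tag == 'fail': return ('top',)
    if tag == '&':    return ('or', F(g[1]), F(g[2]))
    if tag == 'or':   return ('and', F(g[1]), F(g[2]))
    if tag == '=':    return ('neq', g[1], g[2])
    if tag == 'some': return ('forall', g[1], F(g[2]))

def T(g):
    tag = g[0]
    if tag == 'atom': return ('pred', g[1] + '^t', g[2])
    if tag == 'not':  return ('and', T(g[1]), gr(g[1]))      # T G and gr(G)
    if tag in ('true', 'fail', '='): return ('top',)
    if tag == '&':    return ('and', T(g[1]), ('or', F(g[1]), T(g[2])))
    if tag == 'or':   return ('and', T(g[1]), T(g[2]))
    if tag == 'some': return ('forall', g[1], T(g[2]))

def gr(g):
    tag = g[0]
    if tag in ('true', 'fail'): return ('top',)
    if tag == '=':    return ('and', ('gr', g[1]), ('gr', g[2]))
    if tag == 'atom':                       # gr(t1) and ... and gr(tn),
        f = ('top',)                        # right-nested with a trailing
        for a in reversed(g[2]):            # harmless 'top'
            f = ('and', ('gr', a), f)
        return f
    if tag == 'not':  return gr(g[1])
    if tag in ('&', 'or'): return ('and', gr(g[1]), gr(g[2]))
    if tag == 'some': return ('exists', g[1], gr(g[2]))
```

For instance, T applied to s = t & q yields the conjunction ⊤ ∧ (s ≠ t ∨ q^t), which is the formula behind the remark that T(s = t & G) is equivalent to s = t → T G.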

7 The inductive extension of a logic program

The inductive extension of a logic program P is, roughly speaking, Clark’s completion (cf. [6]) of the program plus induction along the definition of the predicates. However, there are essential differences. For instance, the inductive extension is consistent for arbitrary programs. This is not the case for Clark’s completion. In the inductive extension it is also possible to prove termination of predicates. This cannot be done in Clark’s completion. The inductive extension of P, IND(P), comprises the following axioms:

I. The axioms of Clark’s equality theory CET:

1. f(x1, . . . , xm) = f(y1, . . . , ym) → xi = yi   [if f is m-ary and 1 ≤ i ≤ m]
2. f(x1, . . . , xm) ≠ g(y1, . . . , yn)   [if f is m-ary, g is n-ary and f ≢ g]
3. t ≠ x   [if x occurs in t and t ≢ x]

II. Axioms for ‘gr’:

4. gr(c)   [if c is a constant]
5. gr(x1) ∧ . . . ∧ gr(xm) ↔ gr(f(x1, . . . , xm))   [if f is m-ary]

III. Uniqueness axioms (UNI):

6. ¬(R^s(~x) ∧ R^f(~x))

IV. Totality axioms (TOT):

7. R^t(~x) → R^s(~x) ∨ R^f(~x)

V. Fixed point axioms for user-defined predicates R:

8. S D_R^P[~x] ↔ R^s(~x),   F D_R^P[~x] ↔ R^f(~x),   T D_R^P[~x] ↔ R^t(~x)

VI. Fixed point axioms for built-in atomic goals A ∈ D:

9. S B(A) ↔ S A,   F B(A) ↔ F A,   T B(A) ↔ T A.

VII. True axioms for built-in predicates.

VIII. The simultaneous induction scheme for user-defined predicates:

Let R1, . . . , Rn be user-defined predicates and let ϕ1(~x1), . . . , ϕn(~xn) be L̂-formulas such that the length of ~xi is equal to the arity of Ri for i = 1, . . . , n. Let closed(ϕ1(~x1)/R1, . . . , ϕn(~xn)/Rn) be the formula obtained from

∀~x1 (S D_{R1}^P[~x1] → R1^s(~x1)) ∧ . . . ∧ ∀~xn (S D_{Rn}^P[~xn] → Rn^s(~xn))

by replacing simultaneously all occurrences of Ri^s(~t) by ϕi(~t) for i = 1, . . . , n and renaming the bound variables when necessary. Let sub(ϕ1(~x1)/R1, . . . , ϕn(~xn)/Rn) be the formula

∀~x1 (R1^s(~x1) → ϕ1(~x1)) ∧ . . . ∧ ∀~xn (Rn^s(~xn) → ϕn(~xn)).

Then the simultaneous induction axiom is the following formula:

10. closed(ϕ1(~x1)/R1, . . . , ϕn(~xn)/Rn) → sub(ϕ1(~x1)/R1, . . . , ϕn(~xn)/Rn).


We briefly discuss the axioms of IND(P):

I. Clark’s equality theory CET is needed for the formalization of unification. Let E := {s1 = t1, . . . , sn = tn} and ϕE :≡ (s1 = t1 ∧ . . . ∧ sn = tn). If E is unifiable and aσ ≡ bσ for a most general unifier σ of E, then CET proves ϕE → a = b. If E is not unifiable, then CET proves ¬ϕE. If σ is an idempotent most general unifier of E, then CET proves ϕE → (ψ ↔ ψσ) for arbitrary formulas ψ.

II. The predicate ‘gr’ is used to say that a term is ground. We will see below that, if gr(t) is provable from IND(P), then t is ground. We assume that the language L contains at least one constant symbol.

III. From the uniqueness axioms (UNI) one can immediately derive ¬(S G ∧ F G) for arbitrary goals G.

IV. From the totality axioms (TOT) one can derive T G → S G ∨ F G for each goal G.

V. The fixed point axioms for user-defined predicates say that one can read a clause both from body to head and from head to body.

VI. In the fixed point axioms for built-in predicates it is important that A belongs to D. Otherwise, B(A) is not defined. Note that if s and t are two terms such that R(s) ∈ D and R(t) ∈ D, then CET proves s = t → [O B(R(s)) ↔ O B(R(t))] for O ∈ {S, F, T}. If s and t are not unifiable then this is trivial. Otherwise let σ = mgu(s, t). Since (O B(R(s)))σ is the same as O B(R(sσ)), by I, we obtain that CET proves s = t → [O B(R(s)) ↔ O B(R(sσ))]. We also have that CET proves s = t → [O B(R(t)) ↔ O B(R(tσ))]. Thus the claim follows, since sσ ≡ tσ.

VII. We will explain in Definition 8.4 in the next section, after introducing the operator ΓP, what we mean by true axioms for built-in predicates. For example, the following axioms are true:

(1) ∀x1, x2, y (S x1 is y ∧ S x2 is y → x1 = x2).
(2) ∀x (gr(x) ↔ T integer(x)).
(3) ∀x (S integer(x) → F x < x).
(4) ∀x1, x2, y1, y2 (S x1 is y1 ∧ S x2 is y2 → (S x1 < x2 ↔ S y1 < y2)).
(5) ∀x, y, z (S integer_list([x, y, z]) ∧ S x < y ∧ S y < z → S x < z).

Note that axioms like x = 7 ↔ S(x is 3 + 4) are included in the fixed point axioms VI.

VIII. The simultaneous induction scheme expresses the minimality of the R^s predicates. Note that the formulas S D_R^P are positive. Informally, the induction scheme says

that one can use induction along the definition of the predicates. For the append/3 and the list/1 predicate we have the following rules:

∀l ϕ([], l, l)
∀x, l1, l2, l3 (S append(l1, l2, l3) ∧ ϕ(l1, l2, l3) → ϕ([x|l1], l2, [x|l3]))
—————————————————————————————————————
∀l1, l2, l3 (S append(l1, l2, l3) → ϕ(l1, l2, l3))

ϕ([])
∀x, l (S list(l) ∧ ϕ(l) → ϕ([x|l]))
—————————————————————————————————————
∀l (S list(l) → ϕ(l))

Sometimes the induction rule for append/3 is called computational induction and the rule for list/1 is called structural induction. Another form of induction is induction on the universe. This form of induction, however, is not sound, as the following example shows.

Example 7.1 Assume that the language L has exactly one constant symbol c and one unary function symbol f. In this case, induction on the universe is the scheme

(∗∗)  ϕ(c) ∧ ∀x (ϕ(x) → ϕ(f(x))) → ∀x ϕ(x).

Let P be the program with the two clauses q :- r(x) and r(f(x)) :- r(x). Using induction on the universe (∗∗) for ϕ(x) :≡ T r(x) and the fixed point axioms

∀x T r(x) ↔ T q   and   ∀y (x = f(y) → T r(y)) ↔ T r(x)

one can easily derive ∀x T r(x) and hence T q. But the goal q does not terminate under query evaluation. Therefore, induction on the universe is unsound for our purposes. We want that T G is provable if and only if G terminates. This example also shows that we cannot restrict the semantics to Herbrand interpretations only, since for Herbrand interpretations induction on the universe is a valid principle.

Now we state the two main theorems that relate the inductive extension of logic programs to the Prolog query evaluation procedure. The next two theorems say that the first-order theory IND(P) is adequate for proving properties of logic programs. The first theorem says that the Prolog query evaluation procedure can be interpreted in IND(P). For this interpretation the full power of the inductive extension is not used. Only CET and the directions from left to right in the fixed point axioms are needed.

Theorem 7.2 Let Q be a query.

(1) If Q terminates then IND(P) ` T Q.

(2) If Q succeeds with answer σ then IND(P) ` S Qσ.
(3) If Q fails then IND(P) ` F Q.

Proof. By Theorem 5.1, it suffices to show the following:

(1) If P `` T : Q then IND(P) ` T Q.
(2) If P `` Y : Q then IND(P) ` S Q.
(3) If P `` N : Q then IND(P) ` F Q.

We prove these statements by induction on the length of a derivation in the calculus for signed queries. We consider some interesting cases. Note that O(G & true) is equivalent to O G for O ∈ {S, F, T}.

Assume that P `` T : s = t & Q. By the induction hypothesis, we obtain that T Qσ is provable in IND(P) for each substitution σ such that sσ ≡ tσ. We have to show that T(s = t & Q) is derivable in IND(P). Since T(s = t & Q) is equivalent to s = t → T Q, we have to show that IND(P) proves s = t → T Q. If s and t are not unifiable, then ¬(s = t) is derivable in CET and we are done. Otherwise, let σ be an idempotent most general unifier of s and t. By assumption, since sσ ≡ tσ, we know that T Qσ is derivable in IND(P). Since CET proves s = t ∧ T Qσ → T Q, we are done.

Assume that G is ground and that the signed query T : (not G) & Q is derived from the premises T : G & true and Y : G & true. By the induction hypothesis, we obtain that the formulas T G and S G are derivable in IND(P). We want to show that T((not G) & Q) is derivable as well. Since G is ground, the formula gr(G) is provable in IND(P). Since T(not G) is defined as T G ∧ gr(G), we obtain that T(not G) is provable in IND(P). Since F(not G) is defined as S G, we obtain that IND(P) ` T(not G) ∧ (F(not G) ∨ T Q). This is exactly the formula T((not G) & Q) and we are done. 

The main theorem says that the theorems we can derive in IND(P) are true under the procedural interpretation. For example, if the formula T Q is provable, then the query Q terminates.

Main Theorem 7.3 Let Q be a query.

(1) If IND(P) ` T Q then Q terminates.
(2) If IND(P) ` T Q ∧ S Qσ then Q succeeds with answer including σ.
(3) If IND(P) ` T Q ∧ F Q then Q fails.
The rest of this paper deals with the proof of this theorem. Note that the theorem implies, for example, the following existence property:

Corollary 7.4 If IND(P) ` S(some x G[x]) ∧ T(some x G[x]) then there exists a term t such that the goal G[x] succeeds with answer {t/x} and IND(P) ` S G[t].

It is important to note that from the provability of T Q it follows not only that all computations for Q terminate but also that there are no errors in calls of built-in predicates during the computation. There is an interesting analogy between the T operator and the logic of partial terms (cf. e.g. [11, 12]). In the logic of partial terms the expression t ↓ means that the functional program t terminates and that during the evaluation there are no type conflicts, i.e. the program is dynamically well-typed. The meaning of T Q is similar. It means that the evaluation of the goal Q terminates and that there are no error messages caused by non-ground negative goals or wrongly typed built-in atomic goals.

8 Models of the inductive extension

Models of the inductive extension of a logic program can be constructed by iterating a monotonic operator on L̂-structures. An L̂-structure A is given by a non-empty set |A|, a relation A(R) ⊆ |A|^n for each n-ary predicate symbol R of L̂ and a function A(f): |A|^n → |A| for each n-ary function symbol f. We consider the subrelation ordering between L̂-structures. The notion A ≤ B means that

(1) |A| = |B|,
(2) A(f) = B(f) for each function symbol and
(3) A(R) ⊆ B(R) for each predicate symbol R of L̂.

Positive formulas are monotonic with respect to this ordering. If ϕ[~x] is a positive formula, A ≤ B, ~a ∈ |A| and A |= ϕ[~a], then also B |= ϕ[~a].

The operator ΓP assigns to each L̂-structure A a new L̂-structure ΓP(A) such that |ΓP(A)| = |A| and ΓP(A)(f) = A(f) for each function symbol f. The extensions of the predicates in ΓP(A) are defined as follows (we write S R for R^s, F R for R^f, T R for R^t):

ΓP(A)(O R) := {⟨~a⟩ ∈ |A|^n : A |= O D_R^P[~a]},   [if R is user-defined]
ΓP(A)(O R) := {⟨~a⟩ ∈ |A|^n : there exist terms ~t[~x] and ~b ∈ |A| such that ~a = ~t[~b], R(~t[~x]) ∈ D, B(R(~t[~x])) = G[~x] and A |= O G[~b]},   [if R is built-in]
ΓP(A)(gr) := {a ∈ |A| : there exists a closed term t with a = t^A}.

In this definition O ranges over {S, F, T}. In the first line R is an n-ary user-defined predicate; in the second line R is an n-ary built-in predicate.

Lemma 8.1 Let A and B be L̂-structures.

(1) If A ≤ B then ΓP(A) ≤ ΓP(B).
(2) If A satisfies CET and UNI, then ΓP(A) satisfies UNI.
(3) If A satisfies TOT, then ΓP(A) satisfies TOT.
(4) If A satisfies CET and ΓP(A) = A, then A satisfies the fixed point axioms.

The definition of the stages of the operator ΓP is canonical (cf. [17]).

Definition 8.2 Let A be an L̂-structure that satisfies CET. Assume that all predicates are empty in A. Then one defines for ordinal numbers α the stages I_α^{P,A} in the following way:

I_0^{P,A} := A,   I_{α+1}^{P,A} := ΓP(I_α^{P,A}),   I_λ^{P,A} := ⨆_{α<λ} I_α^{P,A}   [for limit ordinals λ].

For all n ≥ 0 and all goals G:

(10) J_P^{n+1} |= O A ⟺ J_P^n |= O B(A),   [if A ∈ D]
(11) J_P^n ⊭ S G ∧ F G,
(12) J_P^n |= T G ⟹ J_P^n |= S G ∨ F G,
(13) J_P^n |= gr(G) ⟹ G is ground.
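The iteration of ΓP can be made concrete on a small finite fragment. The following Python sketch (our own illustration, not part of the paper's formal development; the restriction to ground lists over {'a', 'b'} of length at most 3 is an artificial assumption that makes the iteration finite) computes the least fixed point of the success part of the operator for append/3:

```python
# Illustrative sketch: iterating a monotone operator to a least fixed point,
# in the spirit of the stages of Definition 8.2, for the success relation
# of append/3, restricted to a finite universe of ground lists.
from itertools import product

atoms = ['a', 'b']
lists = [tuple(p) for n in range(4) for p in product(atoms, repeat=n)]

def step(rel):
    """One application of the operator: read the two append/3 clauses
    from body to head (restricted to the finite universe)."""
    new = set(rel)
    for l in lists:                      # append([], l, l).
        new.add(((), l, l))
    for (l1, l2, l3) in rel:             # append([x|l1], l2, [x|l3]) :- ...
        for x in atoms:
            if len(l1) < 3 and len(l3) < 3:
                new.add(((x,) + l1, l2, (x,) + l3))
    return new

rel = set()                              # all predicates empty at stage 0
while step(rel) != rel:                  # iterate to the least fixed point
    rel = step(rel)
# rel now contains exactly the triples (l1, l2, l1 + l2) of the fragment
```

Since `step` only ever adds tuples, it is monotone with respect to set inclusion, which is the finite analogue of the monotonicity of ΓP stated in Lemma 8.1(1).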

The finite stages (J_P^n)_{n<ω}

(7) T-rk(R(~t) & Q, ⟨m, ~n⟩) ⟹ m > 0 and T-rk(D_R^P[~t] & Q, ⟨m − 1, ~n⟩),
(8) S-rk(R(~t) & Q, ⟨m + 1, ~n⟩) ⟹ S-rk(D_R^P[~t] & Q, ⟨m, ~n⟩),
(9) F-rk(R(~t) & Q, ⟨m + 1, ~n⟩) ⟹ F-rk(D_R^P[~t] & Q, ⟨m, ~n⟩),
(10) T-rk(A & Q, ⟨m, ~n⟩) ⟹ m > 0, A ∈ D and T-rk(B(A) & Q, ⟨m − 1, ~n⟩),
(11) S-rk(A & Q, ⟨m + 1, ~n⟩), A ∈ D ⟹ S-rk(B(A) & Q, ⟨m, ~n⟩),
(12) F-rk(A & Q, ⟨m + 1, ~n⟩), A ∈ D ⟹ F-rk(B(A) & Q, ⟨m, ~n⟩),
(13) T-rk((G & H) & Q, ⟨m, ~n⟩) ⟹ T-rk(G & (H & Q), ⟨m, m, ~n⟩),
(14) S-rk((G & H) & Q, ⟨m, ~n⟩) ⟹ S-rk(G & (H & Q), ⟨m, m, ~n⟩),
(15) F-rk((G & H) & Q, ⟨m, ~n⟩) ⟹ F-rk(G & (H & Q), ⟨m, m, ~n⟩),
(16) T-rk((G or H) & Q, ⟨m, ~n⟩) ⟹ T-rk(G & Q, ⟨m, ~n⟩) and T-rk(H & Q, ⟨m, ~n⟩),
(17) S-rk((G or H) & Q, ⟨m, ~n⟩) ⟹ S-rk(G & Q, ⟨m, ~n⟩) or S-rk(H & Q, ⟨m, ~n⟩),
(18) F-rk((G or H) & Q, ⟨m, ~n⟩) ⟹ F-rk(G & Q, ⟨m, ~n⟩) and F-rk(H & Q, ⟨m, ~n⟩),
(19) T-rk((some x G) & Q, ⟨m, ~n⟩) ⟹ T-rk(G{t/x} & Q, ⟨m, ~n⟩) for all terms t,
(20) S-rk((some x G) & Q, ⟨m, ~n⟩) ⟹ there is a term t with S-rk(G{t/x} & Q, ⟨m, ~n⟩),
(21) F-rk((some x G) & Q, ⟨m, ~n⟩) ⟹ F-rk(G{t/x} & Q, ⟨m, ~n⟩) for all terms t,
(22) T-rk((not G) & Q, ⟨m, ~n⟩) ⟹ G is ground and T-rk([G], ⟨m⟩),
(23) G is ground, T-rk([G], ⟨m⟩) ⟹ S-rk([G], ⟨m⟩) or F-rk([G], ⟨m⟩),
(24) T-rk((not G) & Q, ⟨m, ~n⟩), F-rk([G], ⟨m⟩) ⟹ T-rk(Q, ⟨~n⟩),
(25) S-rk((not G) & Q, ⟨m, ~n⟩), G ground ⟹ F-rk([G], ⟨m⟩) and S-rk(Q, ⟨~n⟩),
(26) F-rk((not G) & Q, ⟨m, ~n⟩), F-rk([G], ⟨m⟩) ⟹ F-rk(Q, ⟨~n⟩).

The function ord has been defined in such a way that whenever we have an implication X-rk(G, ⟨~m⟩) ⟹ Y-rk(H, ⟨~n⟩) then ord(H, ⟨~n⟩) < ord(G, ⟨~m⟩). Therefore it is easy to

see that the induction goes through. The induction hypothesis is that for all Q and ⟨~m⟩, if ord(Q, ⟨~m⟩) < α and T-rk(Q, ⟨~m⟩) then

(a) P `` T : Q,
(b) if S-rk(Q, ⟨~m⟩) then P `` Y : Q,
(c) if F-rk(Q, ⟨~m⟩) then P `` N : Q.

Assume that ord(Q, ⟨~m⟩) = α and T-rk(Q, ⟨~m⟩). We have to show (a), (b) and (c). Consider, for example, the case s = t & Q with T-rk(s = t & Q, ⟨m, ~n⟩).

(a) We have to show that P `` T : Qσ for each substitution σ with sσ ≡ tσ. According to the rules of the calculus for signed queries in Table 2 it follows then that P `` T : s = t & Q. Therefore, assume that sσ ≡ tσ. By (4), we obtain T-rk(Qσ, ⟨~n⟩). Since ord(Qσ, ⟨~n⟩) < ord(s = t & Q, ⟨m, ~n⟩), we can apply the induction hypothesis and obtain that P `` T : Qσ.

(b) Assume S-rk(s = t & Q, ⟨m, ~n⟩). It suffices to show that s is identical to t and that P `` Y : Q. By (5), we obtain s ≡ t and S-rk(Q, ⟨~n⟩). By (4), we obtain T-rk(Q, ⟨~n⟩). Since ord(Q, ⟨~n⟩) is less than ord(s = t & Q, ⟨m, ~n⟩), we can apply the induction hypothesis and obtain that P `` Y : Q.

(c) Assume F-rk(s = t & Q, ⟨m, ~n⟩). We have to show that P `` N : Qσ for each substitution σ with sσ ≡ tσ. Therefore, assume that sσ ≡ tσ. By (6), we obtain F-rk(Qσ, ⟨~n⟩) and, by (4), T-rk(Qσ, ⟨~n⟩). Since ord(Qσ, ⟨~n⟩) is less than ord(s = t & Q, ⟨m, ~n⟩), we can apply the induction hypothesis and obtain that P `` N : Qσ.

The other cases go in a similar way. The only critical case is (not G) & Q. Assume T-rk((not G) & Q, ⟨m, ~n⟩).

(a) By (22), it follows that G is ground and T-rk([G], ⟨m⟩). Since ord([G], ⟨m⟩) is less than ord((not G) & Q, ⟨m, ~n⟩), we can apply the induction hypothesis and obtain that P `` T : [G]. By (23), we obtain S-rk([G], ⟨m⟩) or F-rk([G], ⟨m⟩). If S-rk([G], ⟨m⟩) then, by the induction hypothesis, we obtain that P `` Y : [G] and therefore P `` T : (not G) & Q. Otherwise, if F-rk([G], ⟨m⟩), then by (24), we obtain T-rk(Q, ⟨~n⟩).
Since ord(Q, ⟨~n⟩) is less than ord((not G) & Q, ⟨m, ~n⟩), we can apply the induction hypothesis and obtain that P `` T : Q and therefore P `` T : (not G) & Q.

(b) Assume S-rk((not G) & Q, ⟨m, ~n⟩). From (a) we know that the goal G is ground. By (25), we obtain F-rk([G], ⟨m⟩) and S-rk(Q, ⟨~n⟩). From (a) we still have T-rk([G], ⟨m⟩) and, by (24), we obtain T-rk(Q, ⟨~n⟩). By the induction hypothesis, we obtain P `` N : [G] and P `` Y : Q. Thus P `` Y : (not G) & Q.

(c) Assume F-rk((not G) & Q, ⟨m, ~n⟩). From (a) we still have T-rk([G], ⟨m⟩) and S-rk([G], ⟨m⟩) or F-rk([G], ⟨m⟩). If S-rk([G], ⟨m⟩) then, by the induction hypothesis, P `` Y : [G] and thus P `` N : (not G) & Q. Otherwise, if F-rk([G], ⟨m⟩), then by the induction

hypothesis P `` N : [G]. By (24) and (26), T-rk(Q, ⟨~n⟩) and F-rk(Q, ⟨~n⟩). By the induction hypothesis, we obtain P `` N : Q and thus P `` N : (not G) & Q. 

The Main Theorem 7.3 now follows from Theorem 8.7, Theorem 5.1 and the previous lemma.

Proof. (Main Theorem 7.3) Assume that IND(P) ` T Q. By Theorem 8.7 on the finite stages, there exists an n < ω such that J_P^n |= ∀(T Q). Lemma 10.2 says that P `` T : Q. From Theorem 5.1 we obtain that Q terminates.

Assume that IND(P) ` T Q ∧ S Qσ. As in the previous case we obtain an n < ω such that J_P^n |= ∀(T Q) and J_P^n |= S Qσ. By Lemma 10.2 it follows that P `` T : Q and that P `` Y : Qσ. By Theorem 5.1, we obtain that Q succeeds with answer including σ.

Assume that IND(P) ` T Q ∧ F Q. Again, we obtain an n < ω such that J_P^n |= ∀(T Q) and J_P^n |= ∀(F Q). By Lemma 10.2 it follows that P `` T : Q and that P `` N : Q. By Theorem 5.1, we obtain that Q fails. 

11 Appendix

In this appendix we sketch a proof of Theorem 5.1, i.e. the relation between the query evaluation procedure of Table 1 and the calculus for signed queries of Table 2. First we state some elementary properties of the calculus for signed queries.

Lemma 11.1 Let Q be a query.
(1) If P `` Y : Q then P `` Y : Qσ.
(2) If P `` N : Q then P `` N : Qσ.
(3) If P `` T : Q then P `` T : Qσ.

Proof. By induction on the length of a derivation of Y : Q, N : Q or T : Q in the calculus for signed queries. □

Lemma 11.2 It is not possible that P `` Y : Q and P `` N : Q.

Proof. By induction on the length of a derivation of Y : Q and N : Q. □

In Table 1 (transition rules of the query evaluation procedure) we treat environments and substitutions differently. We consider environments as special representations of substitutions. The reason that we represent substitutions in this way is that in computations one has to keep track of the variables that have already occurred. We do this by adding bindings x/x to the current environment.

Definition 11.3 Let η be the environment {t1/x1, . . . , tn/xn}. Then we define:
(1) def(η) := {x1, . . . , xn},
(2) dom(η) := {x ∈ def(η) : xη ≢ x},
(3) vars(η) := dom(η) ∪ ⋃{vars(xη) : x ∈ dom(η)}.

The application of η to a goal G (resp. term t) is written as Gη (resp. tη). It is obtained from G (resp. t) by simultaneously replacing the variables xi by the terms ti for i = 1, . . . , n. The composition of η with an environment θ is the environment

ηθ := {t1θ/x1, . . . , tnθ/xn} ∪ {xθ/x : x ∈ def(θ) \ def(η)}.

Thus def(ηθ) = def(η) ∪ def(θ). The environment η is called idempotent if ηη = η. Equivalently, η is idempotent iff xi does not occur in t1, . . . , tn for each i = 1, . . . , n such that xi ≢ ti. Note that η = θ iff def(η) = def(θ) and tη ≡ tθ for all terms t.

Lemma 11.4 If σ is an idempotent most general unifier of s and t then vars(σ) is contained in vars(s) ∪ vars(t).

Proof. Assume that x ∈ dom(σ) \ [vars(s) ∪ vars(t)]. Let τ := {t/y ∈ σ : x ≢ y}. Then τ is a unifier of s and t and therefore τ = στ. We have xτ ≡ x, but xστ ≡ xσ ≢ x. Contradiction. Thus we can assume that dom(σ) ⊆ vars(s) ∪ vars(t). Assume now that x ∈ dom(σ) and y is a variable that occurs in xσ but not in s or t. Let z1 and z2 be two different variables which belong neither to vars(σ) nor to s nor to t. Let τ := {z1/y}σ{z2/y}. Then τ is a unifier of s and t and therefore τ = στ. Let r :≡ xσ. We have xτ ≡ x{z1/y}σ{z2/y} ≡ r{z2/y}, since x ≢ y, and xστ ≡ rτ ≡ r{z1/y}. This is a contradiction, since y occurs in r and therefore r{z2/y} ≢ r{z1/y}. □

In the next lemma we assume that the function mgu(s, t) returns an idempotent most general unifier if s and t are unifiable.

Lemma 11.5 If η is idempotent and τ = mgu(sη, tη) then ητ is also idempotent.

Proof. By the previous lemma we obtain that vars(τ) ⊆ vars(sη) ∪ vars(tη). Since η is idempotent, it follows that vars(τ) ∩ dom(η) = ∅. This implies that τη = τ ∪ η and η(τ ∪ η) = ητ.
Thus we obtain (ητ)(ητ) = η(τη)τ = η(τ ∪ η)τ = ηττ = ητ. □

The next lemma says that during a computation all variables that occur in a frame are always defined in the environment of the frame.

Lemma 11.6 If Σ⟨Φ; Q, η⟩ is a state in a computation starting from some initial state init(G), then η is idempotent, FV(Q) ⊆ def(η) and vars(xη) ⊆ def(η) for all x ∈ def(η).
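To make Definition 11.3 concrete, the following Python sketch models environments as dictionaries. The encoding and the names apply_env, compose and is_idempotent are our own illustration, not part of LPTP.

```python
# Illustrative Python model of Definition 11.3; an assumption-laden
# sketch, not LPTP's implementation.
# Terms: a capitalized string is a variable, a lowercase string is a
# constant, and a tuple ("f", t1, ..., tn) is a compound term.
# Environments are dicts mapping variable names to terms.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def apply_env(t, eta):
    """t*eta: simultaneously replace every variable defined in eta."""
    if is_var(t):
        return eta.get(t, t)
    if isinstance(t, tuple):
        return (t[0],) + tuple(apply_env(s, eta) for s in t[1:])
    return t  # constant

def compose(eta, theta):
    """eta*theta := {t*theta/x : t/x in eta}
                    + {x*theta/x : x in def(theta) but not def(eta)}."""
    result = {x: apply_env(t, theta) for x, t in eta.items()}
    for x, t in theta.items():
        result.setdefault(x, t)
    return result

def is_idempotent(eta):
    return compose(eta, eta) == eta

# {f(Y)/X} is idempotent; {f(X)/X} is not, since X occurs in its own binding.
print(is_idempotent({"X": ("f", "Y")}))  # True
print(is_idempotent({"X": ("f", "X")}))  # False
```

Note that compose follows the definition literally: bindings of θ are kept only for variables not already defined in η, so def(ηθ) = def(η) ∪ def(θ) holds by construction.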

During a computation the stacks grow and shrink. New frames are added and eventually removed. The next definition expresses that a computation does not go below a certain level, i.e. does not change a certain initial segment of the state.

Definition 11.7 We write Σ1 .∗ Σ2 mod Σ⟨Φ⟩ if there exists a computation from the state Σ1 into the state Σ2 such that all intermediate states, including the first state but not the last state, are of the form Σ⟨Φ; Θ⟩Σ′ with Θ non-empty.

In computations, new bindings are composed to the current environment in unification steps. The next lemma says that these bindings do not act on variables that are defined in the current environment but are not used in the current query.

Lemma 11.8 If Σ⟨Φ; Q, η⟩ .∗ Σ⟨Φ; Ψ; G, τ⟩ mod Σ⟨Φ⟩, then there exists a θ such that ηθ = τ and vars(θ) is disjoint from def(η) \ FV(Qη).

In the next definition the notions 'a query succeeds with an answer' and 'a query fails' are generalized.

Definition 11.9 We say that
(1) a frame ⟨Q, η⟩ returns τ modulo Σ⟨Φ⟩, if there exists a Θ such that Σ⟨Φ; Q, η⟩ .∗ Σ⟨Φ; Θ; true, τ⟩ mod Σ⟨Φ⟩;
(2) a frame ⟨Q, η⟩ fails modulo Σ⟨Φ⟩, if Σ⟨Φ; Q, η⟩ .∗ Σ⟨Φ⟩ mod Σ⟨Φ⟩.

As special cases we have the following:
(1) A query Q succeeds with answer σ iff there exists a τ such that init(Q) returns τ modulo ⟨⟩ and Qσ ≡ Qτ.
(2) A query Q fails iff init(Q) fails modulo ⟨⟩.

Lemma 11.10 Let Q be a query.
(1) If ⟨Q, η⟩ returns τ modulo Σ⟨Φ⟩, then P `` Y : Qτ.
(2) If ⟨Q, η⟩ fails modulo Σ⟨Φ⟩, then P `` N : Qη.

Proof. By induction on the length of the computation. □

Corollary 11.11 Let Q be a query.
(1) If Q succeeds with answer σ, then P `` Y : Qσ.
(2) If Q fails, then P `` N : Q.

The notion 'a query terminates' is generalized to 'a state is terminating'.
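The unification steps mentioned before Lemma 11.8 can be made concrete with a minimal Robinson-style unifier; the sketch below also checks the claim of Lemma 11.5 on an example. The encoding (capitalized strings are variables, tuples ("f", t1, ..., tn) are compound terms) and all function names are our own assumptions, not LPTP's code.

```python
# A minimal Robinson-style unifier; an illustrative sketch only.
# Encoding assumption: capitalized strings are variables, lowercase
# strings are constants, tuples ("f", t1, ..., tn) are compound terms.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def apply_env(t, eta):
    if is_var(t):
        return eta.get(t, t)
    if isinstance(t, tuple):
        return (t[0],) + tuple(apply_env(s, eta) for s in t[1:])
    return t

def compose(eta, theta):
    result = {x: apply_env(t, theta) for x, t in eta.items()}
    for x, t in theta.items():
        result.setdefault(x, t)
    return result

def occurs(v, t):
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, s) for s in t[1:])

def mgu(s, t):
    """Return an idempotent most general unifier of s and t, or None."""
    sub, stack = {}, [(s, t)]
    while stack:
        a, b = stack.pop()
        a, b = apply_env(a, sub), apply_env(b, sub)
        if a == b:
            continue
        if is_var(a):
            if occurs(a, b):
                return None  # occurs check fails
            sub = compose(sub, {a: b})
        elif is_var(b):
            stack.append((b, a))  # handle symmetrically
        elif (isinstance(a, tuple) and isinstance(b, tuple)
              and len(a) == len(b) and a[0] == b[0]):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None  # functor clash
    return sub

# Lemma 11.5 on an example: eta is idempotent, tau = mgu(s*eta, t*eta),
# and eta*tau is idempotent again.
eta = {"X": ("f", "Y")}
s, t = ("p", "X", "Z"), ("p", ("f", "Y"), ("g", "W"))
tau = mgu(apply_env(s, eta), apply_env(t, eta))
print(tau)  # {'Z': ('g', 'W')}
eta_tau = compose(eta, tau)
print(compose(eta_tau, eta_tau) == eta_tau)  # True
```

Because every pair is normalized by the current substitution before a new binding is added, the accumulated substitution stays idempotent, which is exactly the property assumed of mgu(s, t) before Lemma 11.5.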

Definition 11.12 A state Σ is called terminating if all computations starting from Σ are finite and do not end in error.

Assume that Σ is terminating. Since the tree of states reachable from Σ is finitely branching, it follows by König's Lemma that the tree is finite. We call the depth of the tree the rank of Σ.

Lemma 11.13 If Σ⟨Φ; Q, η⟩ is a terminating state then ⟨Q, η⟩ fails modulo Σ⟨Φ⟩ or returns some answer τ modulo Σ⟨Φ⟩.

Proof. By induction on the rank of the terminating state. □

Lemma 11.14 If Σ⟨Φ; Q, η⟩ is terminating then P `` T : Qη.

Proof. By induction on the rank of the terminating state. In the case where the first goal of Q is negated, Lemma 11.13 is used. □

Since a query Q terminates iff the state init(Q) is terminating, we obtain the following corollary.

Corollary 11.15 If Q terminates then P `` T : Q.

Definition 11.16 We write G ≤ H if there exists a substitution θ such that Gθ ≡ H.

Lemma 11.17 If Hσ ≤ Hτ and vars(σ) is disjoint from FV(G) \ FV(H), then we have Gσ ≤ Gτ.

Proof. Assume that Hσα ≡ Hτ. Let β := α|FV(Hσ) ∪ τ|(FV(G) \ FV(H)). Then Gσβ ≡ Gτ. □

Definition 11.18 A state Σ is called safe if there is no computation starting from Σ that ends in error.

The next lemma is a generalized lifting lemma.

Lemma 11.19 Let Q be a query.
(1) If P `` Y : Qηθ then, for all Σ and Φ such that Σ⟨Φ; Q, η⟩ is safe, the frame ⟨Q, η⟩ returns an answer τ modulo Σ⟨Φ⟩ such that Qτ ≤ Qηθ.
(2) If P `` N : Qη then, for all Σ and Φ such that Σ⟨Φ; Q, η⟩ is safe, the frame ⟨Q, η⟩ fails modulo Σ⟨Φ⟩.
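The relation G ≤ H of Definition 11.16 amounts to one-sided matching: we look for a θ that instantiates G to H without touching H. A small Python sketch, under the hypothetical encoding in which capitalized strings are variables and tuples are compound terms (names and encoding are ours, not LPTP's):

```python
# Definition 11.16 (G <= H iff G*theta == H for some theta), modeled
# by one-sided matching. Illustrative sketch; encoding assumption:
# capitalized strings are variables, tuples ("f", t1, ...) are compound.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def match(pat, tgt, binding=None):
    """Return theta with pat*theta == tgt, or None if no such theta."""
    if binding is None:
        binding = {}
    if is_var(pat):
        if pat in binding:  # variable already bound: must agree
            return binding if binding[pat] == tgt else None
        binding[pat] = tgt
        return binding
    if (isinstance(pat, tuple) and isinstance(tgt, tuple)
            and len(pat) == len(tgt) and pat[0] == tgt[0]):
        for p, t in zip(pat[1:], tgt[1:]):
            binding = match(p, t, binding)
            if binding is None:
                return None
        return binding
    return binding if pat == tgt else None  # constants must be identical

def leq(g, h):
    """g <= h in the sense of Definition 11.16."""
    return match(g, h) is not None

# r(X) is more general than r(d), but not conversely
# (compare the cut example in the Conclusion).
print(leq(("r", "X"), ("r", "d")))  # True
print(leq(("r", "d"), ("r", "X")))  # False
```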

Proof. By induction on the length of a derivation of Y : Qηθ or N : Qη. The following is used several times. Assume that (a) vars(Gη) ⊆ def(η), (b) ⟨H, η⟩ returns τ modulo Σ⟨Φ⟩, (c) Hτ ≤ Hηθ. Then Gτ ≤ Gηθ. This can be seen as follows. By Lemma 11.8, there exists a τ′ such that τ = ητ′ and vars(τ′) is disjoint from def(η) \ FV(Hη). Since τ = ητ′ we have Hητ′ ≤ Hηθ. Since vars(Gη) is contained in def(η), vars(τ′) is disjoint from FV(Gη) \ FV(Hη). By Lemma 11.17, we obtain that Gητ′ ≤ Gηθ and thus Gτ ≤ Gηθ. □

Lemma 11.20 If P `` T : Qη then for all computations Σ0, Σ1, . . . starting from the state Σ⟨Φ; Q, η⟩ there exists an n such that Σn is of the form Σ yes(τ) or Σ⟨Φ⟩.

Proof. By induction on the length of a derivation of T : Q. In the case where the first goal of Q is negated, Lemma 11.2 is used. □

Corollary 11.21 If P `` T : Q then Q terminates and is safe.

Proof. Assume that P `` T : Q. By the previous lemma it follows that all computations starting with init(Q) are finite and end in yes(τ) or no. Therefore Q terminates and is safe. □

Corollary 11.22 Let Q be a query.
(1) If P `` Y : Qθ and P `` T : Q then Q succeeds with answer including θ.
(2) If P `` N : Q and P `` T : Q then Q fails.

Proof. Assume that P `` T : Q. By the previous corollary it follows that init(Q) is a safe state. The claim now follows by Lemma 11.19. □

12 Conclusion

We have presented a framework in which one can prove properties of pure Prolog-like logic programs in classical first-order logic. The properties include simple properties like modes and type properties, but also more complicated properties like termination and equivalence of predicates. The programs are purely declarative but contain negation-as-failure and so-called logical built-in predicates. These are predicates that are stable under substitutions. They include, for example, the standard arithmetic predicates, but also predicates like call/n + 1 which are sometimes not considered logical.

The programs do not contain assert/1 and retract/1 or the cut (!) operator. The reason that our theory cannot be extended to assert/1 and retract/1 is simple. Using these predicates it is possible to write self-modifying programs which add clauses to and remove clauses from the program at run-time. Our approach, however, is static. We transform the predicates of a program into monotonic inductive definitions such that we can use induction along these definitions to prove properties of the programs. This is not possible with self-modifying programs.

Programs with cut cannot be handled, since in general they do not have the lifting property. This property, however, is implied by our semantics. Consider, for example, the following program, where c and d are two different constants:

r(c) :- !.
r(d).

This simple program does not have the lifting property. Take the goal r(d). It succeeds with answer yes. But the more general goal r(x) has the answer x = c only, and no answer which is more general than x = d.

References

[1] J. Andrews. A logical semantics for depth-first Prolog with ground negation. Theoretical Computer Science, 184(1,2):105–143, 1997.

[2] K. R. Apt and E. Marchiori. Reasoning about Prolog programs: from modes through types to assertions. Formal Aspects of Computing, 6(6A):743–765, 1994.

[3] K. R. Apt and D. Pedreschi. Reasoning about termination of pure Prolog programs. Information and Computation, 106(1):109–157, 1993.

[4] M. Baudinet. Proving termination properties of Prolog programs: a semantic approach. In Proceedings of the Third Annual IEEE Symposium on Logic in Computer Science, LICS '88, pages 336–347, Edinburgh, Scotland, 1988. IEEE Computer Society Press.

[5] R. S. Boyer and J. S. Moore. A Computational Logic Handbook. Academic Press, 1988.

[6] K. L. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and Data Bases, pages 293–322. Plenum Press, New York, 1978.

[7] S. K. Debray and P. Mishra. Denotational and operational semantics for Prolog. J. of Logic Programming, 5(1):61–91, 1988.

[8] K. Doets. From Logic to Logic Programming. The MIT Press, 1994.

[9] W. Drabent and J. Maluszyński. Inductive assertion method for logic programs. Theoretical Computer Science, 59:133–155, 1988.

[10] B. Elbl. Deklarative Semantik von Logikprogrammen mit PROLOGs Auswertungsstrategie. PhD thesis, Universität der Bundeswehr, München, Germany, 1994.


[11] S. Feferman. Logics for termination and correctness of functional programs. In Y. N. Moschovakis, editor, Logic from Computer Science, pages 95–127, New York, 1992. Springer-Verlag.

[12] S. Feferman. Logics for termination and correctness of functional programs, II. Logics of strength PRA. In P. Aczel, H. Simmons, and S. S. Wainer, editors, Proof Theory, pages 195–225. Cambridge University Press, 1992.

[13] G. Jäger and R. F. Stärk. A proof-theoretic framework for logic programming. In S. R. Buss, editor, Handbook of Proof Theory, pages 639–682. Elsevier, 1998.

[14] M. Kalsbeek. Meta Logics for Logic Programming. PhD thesis, University of Amsterdam, 1995. ILLC Dissertation Series 1995-13.

[15] K. Kunen. Negation in logic programming. J. of Logic Programming, 4(4):289–308, 1987.

[16] L. T. McCarty. Proving inductive properties of Prolog programs in second-order intuitionistic logic. In D. S. Warren, editor, Proceedings of the 10th International Conference on Logic Programming, pages 44–63, Budapest, 1993. The MIT Press.

[17] Y. N. Moschovakis. Elementary Induction on Abstract Structures. North-Holland, Amsterdam, 1974.

[18] F. Pfenning, editor. Types in Logic Programming. The MIT Press, Cambridge, 1992.

[19] Z. Somogyi, F. Henderson, and T. Conway. The execution algorithm of Mercury, an efficient purely declarative logic programming language. J. of Logic Programming, 29(1–3):17–64, 1996.

[20] R. F. Stärk. The declarative semantics of the Prolog selection rule. In Proceedings of the Ninth Annual IEEE Symposium on Logic in Computer Science, LICS '94, pages 252–261, Paris, France, July 1994. IEEE Computer Society Press.

[21] R. F. Stärk. First-order theories for pure Prolog programs with negation. Archive for Mathematical Logic, 34(2):113–144, 1995.

[22] R. F. Stärk. Why the constant 'undefined'? Logics of partial terms for strict and non-strict functional programming languages. J. of Functional Programming, 8(2):97–129, 1998.
