A Linear-Logic Semantics for Constraint Handling Rules With Disjunction

Hariolf Betz, Thom Frühwirth
Department of Computer Science, University of Ulm

Abstract. We motivate and develop a linear logic declarative semantics for CHR∨ , an extension of the CHR programming language that integrates concurrent committed choice with backtrack search and a predefined underlying constraint handler. We show that our semantics maps each of these aspects of the language to a distinct aspect of linear logic. We show how we can use this semantics to reason about derivations in CHR∨ and we present strong theorems concerning its soundness and completeness.

1 Introduction

A declarative semantics is a highly desirable property for a programming language. It allows one to prove various program properties – foremost correctness –, guarantees platform independence and eliminates various sources of error. Furthermore, declarative programs tend to be shorter and clearer as they contain – in the ideal case – only information about the modeled problem and none about control.

Constraint Handling Rules With Disjunction (CHR∨) [2] is an extension of Constraint Handling Rules (CHR) [10–13]. CHR∨ is a multi-paradigm logical programming language that seamlessly integrates a predefined underlying constraint solver with the forward reasoning inherited from pure CHR and with backtrack search, which allows for a seamless embedding of Prolog into CHR∨. As Abdennadher suggested [3], it is therefore as interesting as a language to reason about declarative paradigms as it is in its own right as a programming language.

The following simple example program expresses in the first line that a bird might either be an albatross or a penguin. In the second line, an integrity constraint is given, stating that a penguin cannot fly. Obviously, for an input bird ∧ flies, the result must be albatross ∧ flies.

bird ⇔ albatross ∨ penguin.
penguin, flies ⇔ false.

Owing to its predecessors in logic and constraint logic programming, both CHR and CHR∨ feature a declarative semantics in classical logic. However, we have shown in previous work [6] that the classical declarative

semantics of CHR reflects the functionality of the program poorly for certain classes of programs¹, and have proposed a linear logic semantics for pure CHR instead. In this paper, we extend this linear logic semantics to CHR∨. Our first example program would then map to the following logical formula:

!(bird ⊸ albatross ⊕ penguin) ⊗ !(penguin ⊗ flies ⊸ 0)

This interpretation is indeed faithful to the operational semantics of CHR∨ as it logically implies the following formula:

bird ⊗ flies ⊸ albatross ⊗ flies

A conjunction of bird and flies can be mapped to a conjunction of albatross and flies. The declarative semantics that we propose preserves the clear distinction between don’t-care and don’t-know nondeterminism in CHR∨ by mapping it to the dualism of internal and external choice in linear logic. While forward reasoning formalisms have seen semantics in linear logic before – such as the π-calculus [18] and the CC/LCC class of programming languages [9] – none of those formalisms features a dichotomy between don’t-care and don’t-know nondeterminism. Therefore, our approach is unique in mapping this dichotomy into the framework of linear logic.

This paper is structured as follows: The following section provides introductions to the intuitionistic segment of linear logic as well as to the CHR∨ programming language. In Sect. 3, we first recapitulate our linear logic semantics for the segment of pure CHR. Then we develop and define the extension of our semantics to CHR∨. In Sect. 4 we present two theorems concerning the soundness and completeness of our semantics w.r.t. the abstract operational semantics of CHR∨. In Sect. 5 we give an example for the application of our semantics. In Sect. 6 we compare our approach to related works and discuss our conclusions.
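The behaviour claimed for the bird/penguin example can be prototyped in a few lines. The following sketch is entirely our own (the rule encoding and function names are assumptions, not part of any CHR system): a state is a multiset of ground atoms, the two rules of the example are applied exhaustively, and branches that derive false are pruned.

```python
from collections import Counter

# Hypothetical mini-interpreter for the bird/penguin example: states are
# multisets of ground atoms, a rule body is a list of alternatives
# (disjuncts), and 'false' marks a failed branch.
RULES = [
    (["bird"], [["albatross"], ["penguin"]]),    # bird <=> albatross ∨ penguin
    (["penguin", "flies"], [["false"]]),         # penguin, flies <=> false
]

def step(state):
    """Apply the first applicable rule, returning the list of successor states."""
    for head, alts in RULES:
        if all(state[a] >= Counter(head)[a] for a in head):
            rest = state - Counter(head)
            return [rest + Counter(alt) for alt in alts]
    return None  # no rule applicable: state is final

def run(state):
    """Exhaustively rewrite, pruning branches that derive 'false'."""
    frontier, final = [state], []
    while frontier:
        s = frontier.pop()
        succ = step(s)
        if succ is None:
            final.append(s)
        else:
            frontier.extend(t for t in succ if t["false"] == 0)
    return final

result = run(Counter(["bird", "flies"]))
print([sorted(s.elements()) for s in result])  # [['albatross', 'flies']]
```

Running it on the goal bird, flies leaves exactly one surviving branch, albatross, flies, matching the derivation discussed above.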

2 Preliminaries

In this section we will provide a short introduction to the concepts of both intuitionistic linear logic and the CHR∨ programming language as far as relevant to this paper.

2.1 Intuitionistic Linear Logic

Linear logic, introduced by Girard in 1987 [14], features a fine distinction between internal and external choice and a faithful embedding of classical logic into linear logic. In this paper, we will limit our focus to a commonly used segment of linear logic, called intuitionistic linear logic (ILL). The syntax of intuitionistic linear logic is given in Fig. 1. The tokens of (intuitionistic) linear logic are generally considered to represent resources rather than propositions. This terminology reflects the

¹ Namely programs featuring destructive update and/or purposeful non-confluence.

fact that these tokens may be consumed during the process of reasoning in linear logic, as well as its awareness of multiplicities. Concretely, the atomic formula (A) is not equivalent to the formula (A ⊗ A), pronounced ”A times A” (or ”both A and A” in Wadler’s terminology [19]). Rather, the former represents one instance, the latter two instances of a certain resource A.

The ”⊗” (”times”) conjunction (multiplicative conjunction) is nevertheless close to the intuitive notion that we have of classical conjunction: For any two resources A and B, the conjunction A ⊗ B denotes the resource that is available iff both A and B are available.

Linear implication ”⊸” (”lollipop”) is set apart from classical implication by the fact that the preconditions of a linear implication are consumed during the process of reasoning. A formula of the form A ⊸ B (Wadler: ”consume A yielding B”) expresses the fact that we can substitute one instance of a resource A (that we dispose of) for one instance of a resource B. Note that the formula (A ⊸ B) is a resource itself, and as such is also used up when applied.

The ”!” (”bang”) modality marks stable facts or unlimited resources. Arguably, the most important property of a banged resource is that it is not consumed in the process of reasoning. Thus, the formula A ⊗ !(A ⊸ B) implies e.g. B ⊗ !(A ⊸ B). Furthermore, multiple instances of banged resources are idempotent and we can apply weakening to them. The constant ”1” represents the empty resource and it is consequently the neutral element with respect to the ”⊗” connective.

In our context, we will use the turnstile symbol ”⊢” as an abbreviation for ”⊢ILL”, which denotes deducibility w.r.t. the sequent calculus of intuitionistic linear logic as defined in [14].

Example 1. We model the fact that one cup of coffee is two euros as !(E ⊗ E ⊸ C). A ”bottomless cup” is an offer where an unlimited number of refills is included.
If we assume that the exact number of refills is completely arbitrary and it also includes the possibility of not getting coffee at all, we can model it as !(E ⊗ E ⊸ !C). Finally, we can model the fact that sugar is free as !(1 ⊸ S). Together, the latter two formulae imply e.g. that it is possible to get two cups of coffee with sugar for two euros:

!(E ⊗ E ⊸ !C), !(1 ⊸ S) ⊢ (E ⊗ E ⊸ C ⊗ C ⊗ S ⊗ S)

Another important aspect of linear logic is its distinction between internal and external choice, i.e. between decisions that can be made during the process of reasoning and decisions of undetermined result (e.g. those enforced by the environment).
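The resource reading of Example 1 can be simulated by treating a linear implication as a one-shot rewrite on a multiset of resources; a banged implication is simply one we are allowed to apply again and again. The helper below is a sketch under our own encoding (names are assumptions):

```python
from collections import Counter

def apply_rule(state, consume, produce):
    """One application of a linear implication: consume the premises,
    add the conclusions. Returns None if the premises are unavailable."""
    need = Counter(consume)
    if any(state[r] < need[r] for r in need):
        return None
    return state - need + Counter(produce)

# Two euros buy one coffee: !(E ⊗ E ⊸ C). The bang means the rule
# itself is never used up, so we may apply it as often as we can pay.
state = Counter({"euro": 4})
state = apply_rule(state, ["euro", "euro"], ["coffee"])
state = apply_rule(state, ["euro", "euro"], ["coffee"])
print(dict(state))                                      # {'coffee': 2}
print(apply_rule(state, ["euro", "euro"], ["coffee"]))  # None: no euros left
```

Note that the rule can fire twice because only the euros, not the rule, are consumed; a third application fails, which is exactly the linearity of the premises.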

L ::= p(t̄) | L ⊸ L | L ⊗ L | L & L | L ⊕ L | !L | ∃x.L | ∀x.L | ⊤ | 1 | 0

Fig. 1. The syntax of intuitionistic linear logic

In classical logic, this distinction is associated with the duality of classical conjunction and disjunction. Linear logic offers two dedicated connectives that explicitly express modes of choice: The ”&” (”with”) conjunction (additive conjunction) expresses internal choice. E.g. the formula A & B (Wadler: ”either A or B”) implies A and it implies B; both conclusions are correct. However, it does not imply A ⊗ B. The ”⊕” disjunction expresses external choice, i.e. a formula of the form A ⊕ B in itself implies neither A alone nor B alone. In this respect, it is analogous to classical disjunction.

The constant ”⊤” (”top”) is the resource that all other resources can be mapped to, i.e. for every A, (A ⊸ ⊤) is a tautology. This property makes ”⊤” the neutral element with respect to the ”&” conjunction. The constant ”0” is reasonably close to falsity in classical logic: It represents failure and it is the resource which yields every other resource. It is the neutral element with respect to ”⊕”.

Example 2. On any given day, it could be either sunny or raining. As we have no influence on the weather, we model this as an external choice: (S ⊕ R). On a rainy day, our local café has only seats on the inside: !(R ⊸ I). On a sunny day, we have the (internal) choice to sit either on the inside or on the outside: !(S ⊸ I&O). This implies that on any given day, it is at least possible to sit on the inside of the café:

!(R ⊸ I), !(S ⊸ I&O) ⊢ (S ⊕ R) ⊸ I

We can extend intuitionistic linear logic into a first-order system with the quantifiers ”∃” and ”∀”. The resulting first-order system allows for a faithful embedding of intuitionistic logic. This is widely considered one of the most important features of linear logic. Figure 2 presents Girard’s translation [14] of intuitionistic logic into intuitionistic linear logic.
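Example 2 amounts to a quantifier alternation: for every external choice (the weather), some internal choice among the applicable rule's alternatives yields I. In this finite, propositional setting the entailment can be checked by brute force; the encoding below is our own toy representation, not a linear-logic prover.

```python
# Brute-force check of the café entailment, mirroring
# !(R ⊸ I), !(S ⊸ I&O) ⊢ (S ⊕ R) ⊸ I.
RULES = {"R": [["I"]],          # rainy day: inside only
         "S": [["I"], ["O"]]}   # sunny day: we pick inside or outside

def always_reachable(goal):
    # external choice: the weather is picked for us (all);
    # internal choice: we pick the alternative (any)
    return all(any(goal in alt for alt in RULES[w]) for w in ("S", "R"))

print(always_reachable("I"))  # True: inside is always possible
print(always_reachable("O"))  # False: on a rainy day there is no outside seat
```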

p(t̄)^G ::= p(t̄)
(A ∧ B)^G ::= A^G & B^G
(A → B)^G ::= (!A^G) ⊸ B^G
(A ∨ B)^G ::= (!A^G) ⊕ (!B^G)
(⊤)^G ::= ⊤
(⊥)^G ::= 0
(¬A)^G ::= !A^G ⊸ 0
(∀x.A)^G ::= ∀x.(A^G)
(∃x.A)^G ::= ∃x.!(A^G)

Fig. 2. Translation from intuitionistic logic into linear logic

The operator ^G represents translation from intuitionistic into linear logic. p(t̄) stands for an atomic proposition. Girard has proven in [14] that an intuitionistic sequent (Γ ⊢LJ α) is provable iff (!Γ^G ⊢ α^G) is provable in linear logic (where ⊢LJ stands for deducibility w.r.t. Gentzen’s system LJ for intuitionistic logic).
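Since the translation of Fig. 2 is defined by structural recursion, it can be transcribed directly. The sketch below uses a tuple encoding of formulas that is our own assumption (atoms as strings, connectives as tagged tuples); the quantifier cases carry the bound variable through unchanged.

```python
def girard(f):
    """Girard's translation of intuitionistic formulas into linear logic.
    Formulas are tuples like ('and', A, B); atoms are strings."""
    if isinstance(f, str):                      # atomic proposition
        return f
    op = f[0]
    if op == "and":
        return ("with", girard(f[1]), girard(f[2]))              # A∧B -> A&B
    if op == "imp":
        return ("lolli", ("bang", girard(f[1])), girard(f[2]))   # A→B -> !A ⊸ B
    if op == "or":
        return ("plus", ("bang", girard(f[1])), ("bang", girard(f[2])))  # A∨B -> !A ⊕ !B
    if op == "not":
        return ("lolli", ("bang", girard(f[1])), "0")            # ¬A -> !A ⊸ 0
    if op == "forall":
        return ("forall", f[1], girard(f[2]))                    # ∀x.A -> ∀x.A^G
    if op == "exists":
        return ("exists", f[1], ("bang", girard(f[2])))          # ∃x.A -> ∃x.!A^G
    raise ValueError(op)

print(girard(("imp", "A", ("or", "B", "C"))))
# ('lolli', ('bang', 'A'), ('plus', ('bang', 'B'), ('bang', 'C')))
```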

2.2 Constraint Handling Rules With Disjunction

Constraint Handling Rules (CHR) [10–13] is a concurrent programming language developed in the 1990s by Frühwirth and originally intended as a portable language extension for the implementation of constraint solvers, which has also come into use as a stand-alone general purpose concurrent programming language. In 1999, Abdennadher proposed CHR with Disjunction (CHR∨) [2], which extends CHR with the possibility to include disjunctions in the rule bodies. This allows for backtracking search and reasoning techniques like abduction in CHR programs. In the following, we will introduce CHR∨ as a self-contained language.

The Syntax of CHR∨. We distinguish two disjoint sets of atomic constraints, which we call built-in constraints and CHR constraints. While the behavior of CHR constraints is determined by a CHR∨ program, we assume that reasoning over the built-in constraints is performed by a predefined (classic) constraint handler. The set of built-in constraints contains at least the constraints true, false and =̇ for syntactic equality. A built-in constraint is either an atomic formula of intuitionistic logic or a conjunction thereof. A CHR constraint is a non-empty multiset of atomic formulae. This distinction in the treatment of CHR and built-in constraints emphasizes that a set semantics applies to built-in constraints whereas a multiset semantics applies to CHR constraints. A goal is a multiset, the elements of which are either built-in constraints or atomic CHR constraints or disjunctions of goals.

Programs in CHR∨ consist of two sorts of guarded rules. Simplify rules express the conditional substitution of a CHR constraint with a certain goal. Propagate rules express the addition of a goal under certain conditions without removing anything. The syntax of Simplify and Propagate rules is (E ⇔ C|G) and (E ⇒ C|G), respectively. The rule head E is a CHR constraint, the guard C is a built-in constraint and the body G is a goal. If the guard equals true, it can be omitted. The syntax of CHR∨ is given in Fig. 3.

Built-in constraint:  C, D ::= {⊤} | {⊥} | c(t̄) | C ∧ D
CHR constraint:       E, F ::= {e(t̄)} | E ∪ F
Goal:                 G, H ::= {C} | E | G ∪ H | {G ∨ H}
Simplify rule:        R ::= (E ⇔ C | G)
CHR program:          P ::= R1, ..., Rn   (n ≥ 0)

Fig. 3. The syntax of CHR∨
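One way to make the grammar of Fig. 3 concrete is a handful of record types. The class names and field layout below are our own choices for illustration, not part of any CHR implementation; multisets are represented as tuples.

```python
from dataclasses import dataclass

# A possible concrete representation of the Fig. 3 grammar. Built-in and
# CHR constraints are kept in separate types, mirroring the disjoint
# syntactic categories.
@dataclass(frozen=True)
class Builtin:            # c(t̄); a conjunction is flattened into a tuple
    atoms: tuple

@dataclass(frozen=True)
class ChrConstraint:      # e(t̄): a single atomic CHR constraint
    name: str
    args: tuple

@dataclass(frozen=True)
class Disjunction:        # {G ∨ H} inside a goal; each side is a goal tuple
    left: tuple
    right: tuple

@dataclass(frozen=True)
class Rule:               # (E ⇔ C | G)
    head: tuple           # multiset of ChrConstraint
    guard: Builtin
    body: tuple           # goal

# bird ⇔ albatross ∨ penguin
bird_rule = Rule(
    head=(ChrConstraint("bird", ()),),
    guard=Builtin(()),    # guard 'true' is the empty conjunction
    body=(Disjunction((ChrConstraint("albatross", ()),),
                      (ChrConstraint("penguin", ()),)),),
)
print(bird_rule.head[0].name)  # bird
```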

CHR states. A CHR state is of the form ⟨G; C⟩, where G is a goal and C is a built-in constraint. G is called the goal store and C is called the constraint

store. Of the two, only the goal store can be arbitrarily manipulated by a CHR program. The constraint store is handled by a predefined constraint handler according to a constraint theory CT. Information can only be added to the constraint store but not removed.

In a pure CHR program, there is exactly one CHR state at each moment of its execution. In CHR∨, this concept is extended to a disjunction of CHR states. A configuration is a disjunction of CHR states of the form S1 ∨ S2 ∨ · · · ∨ Sn, where S1, S2, ..., Sn are CHR states. Each CHR state within a configuration represents an independent branch of a search tree. On execution, a CHR∨ program is given a single state of the form ⟨G; true⟩, i.e. with an empty constraint store. A state of this form is called an initial state. Our notation for CHR states is summarized in Fig. 4. Note that to simplify notation, we allow a disjunction to be empty (ε). Such an empty disjunction is semantically equivalent to a state ⟨false; false⟩.

CHR state:              S ::= ⟨G; C⟩
Initial state:          S0 ::= ⟨G; ⊤⟩
Disjunction of states:  S̄, T̄ ::= ε | S | S ∨ S̄

Fig. 4. Notation for CHR∨ states

Operational Semantics. Even for pure CHR, there are actually several variants of the operational semantics. These variants carry over to CHR∨, so we have to decide which operational semantics to consider. The original operational semantics for CHR was published in [10] and is known as the abstract semantics of CHR. It is the original and most general operational semantics in that every derivation possible in any of the other semantics is also true in the abstract semantics. In [1], Abdennadher extended the abstract semantics with a token store for propagation rules in order to avoid trivial non-termination. This was extended to the so-called refined semantics of CHR [8], which is closest to the execution strategy used in most implementations of CHR. Examples of other relevant operational semantics include especially the semantics of Probabilistic CHR [12], in which each applicable rule has a (weighted) chance of firing.

As our operational semantics is meant as a framework to reason theoretically about CHR∨ programs, it is natural that we choose the abstract semantics, as the most general of its kind, for our considerations. The transition rules that constitute the operational semantics of CHR are summarized in Fig. 5. Arguably the most important transition rule, Simplify performs a conditional substitution of a CHR constraint in the goal store with a different one.

A CHR∨ rule (F ⇔ D | H) is applicable if the rule head F matches a CHR constraint E in the goal store². Furthermore, the constraint store C of the current program state must imply the rule guard D under the constraint theory CT. If Simplify is applied, the constraint E in the goal store is substituted by the rule body H, and the variable matching (F =̇ E) as well as the guard D are added to the constraint store.

Propagation differs from simplification in that the matched CHR constraints E are kept in the goal store rather than being removed. In practical terms, this raises the question of how to avoid trivial non-termination, which has been addressed in [1] and [8]. With respect to the abstract semantics, however, Propagate rules can be faithfully reduced to Simplify rules by adding a copy of the rule head into the rule body in the source code. Hence, we will consider propagation a special case of simplification in this paper.

The Solve transition moves a built-in constraint from the goal store to the constraint store. Optionally, the built-in constraint solver can also apply some simplification to the constraint store. However, this is irrelevant for the declarative semantics and shall be ignored in this paper.

If at least one goal in the goal store contains a disjunction, the Split rule is applicable. On splitting, the current state is split into a disjunction of two states, in each of which the goal with the disjunction is substituted with one of its disjoint subgoals.

Simplify
  If    (F ⇔ D|H) is a fresh variant of a rule in P with variables x̄
  and   CT ⊨ ∀(C → ∃x̄(F =̇ E ∧ D))
  then  S̄ ∨ ⟨E ∪ G; C⟩ ∨ T̄ ↦ S̄ ∨ ⟨H ∪ G; (F =̇ E) ∧ D ∧ C⟩ ∨ T̄

Solve
  If    CT ⊨ (C ∧ D1) ↔ D2
  then  S̄ ∨ ⟨{C} ∪ G; D1⟩ ∨ T̄ ↦ S̄ ∨ ⟨G; D2⟩ ∨ T̄

Split
  S̄ ∨ ⟨G ∪ {G1 ∨ G2}; C⟩ ∨ T̄ ↦ S̄ ∨ ⟨G ∪ G1; C⟩ ∨ ⟨G ∪ G2; C⟩ ∨ T̄

Fig. 5. CHR∨ transition rules
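For ground (variable-free) programs the three transitions can be prototyped directly; matching and guard entailment, which require a real constraint handler, are omitted here. The encoding below is a sketch of our own, using the bird/penguin program from the introduction: a goal is a tuple of items, an item is an atom, a ('builtin', c) entry, or an ('or', G1, G2) disjunction, and a configuration is a list of (goal, constraint store) states.

```python
# Ground prototype of the CHR∨ transitions Simplify, Solve and Split.
RULES = [
    # bird <=> albatross ∨ penguin   (the body is a single disjunction item)
    (("bird",), (("or", ("albatross",), ("penguin",)),)),
    # penguin, flies <=> false
    (("penguin", "flies"), (("builtin", "false"),)),
]

def step(config):
    for i, (goal, store) in enumerate(config):
        # Split: a disjunction in the goal forks the state
        for j, item in enumerate(goal):
            if isinstance(item, tuple) and item[0] == "or":
                rest = goal[:j] + goal[j + 1:]
                return config[:i] + [(rest + item[1], store),
                                     (rest + item[2], store)] + config[i + 1:]
        # Solve: move a built-in into the constraint store
        for j, item in enumerate(goal):
            if isinstance(item, tuple) and item[0] == "builtin":
                rest = goal[:j] + goal[j + 1:]
                return config[:i] + [(rest, store | {item[1]})] + config[i + 1:]
        # Simplify: replace a matched head multiset by the rule body
        for head, body in RULES:
            g = list(goal)
            if all(g.count(h) >= head.count(h) for h in set(head)):
                for h in head:
                    g.remove(h)
                return config[:i] + [(tuple(g) + body, store)] + config[i + 1:]
    return None  # no transition applicable

def run(config):
    while (nxt := step(config)) is not None:
        config = nxt
    # drop failed states, i.e. those whose store entails false
    return [s for s in config if "false" not in s[1]]

print(run([(("bird", "flies"), frozenset())]))
# [(('flies', 'albatross'), frozenset())]
```

Starting from ⟨{bird, flies}; true⟩, Simplify and Split produce two branches; the penguin branch derives false and is discarded, leaving the albatross branch as the only answer.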

3 A Linear Logic Semantics for CHR∨

In this section, we will first recapitulate the previously published linear logic semantics for the segment of pure CHR [6] and then discuss how to extend this result to the full segment of CHR∨.

² Note that CHR does not use unification like Prolog but one-sided matching, i.e. the variables in the rule head have to be matched with those in the store, not vice versa.

3.1 The Linear Logic Semantics for pure CHR

Our semantics is based on the observation that the Simplify transition of CHR behaves similarly to the modus ponens of linear logic. Both mechanisms can be characterized as the rewriting of one or several logical predicates in a context that behaves as a multiset. Hence, we translate Simplify rules to linear implications and, consequently, CHR states to multiplicative conjunctions of linear predicates.

In the general case – and unlike linear implication – Simplify rules are guarded. We can straightforwardly translate the guard into another precondition of the corresponding linear implication. We must make sure, however, that the predicates that correspond to the built-in constraints are not ”consumed” during the process. Therefore, we introduce the convention that built-in constraints appear only banged throughout our semantics. As a side-effect, this helps to distinguish built-in constraints from CHR constraints. The built-in constants true and false represent the empty constraint and failure, which is why we map them to the linear constants ”1” and ”0”, respectively.

A couple of adaptations have to be made to the underlying constraint theory CT according to which the built-in constraints are handled. As Girard showed in [14], it is possible to faithfully translate intuitionistic logic into intuitionistic linear logic (ILL). If we require that the theory CT be a theory of intuitionistic logic, we can thus translate it to ILL. Secondly, we have to make sure that the equality constraint is handled correctly. For this purpose, we add a number of formulae to the constraint theory. Concretely, for every n-ary CHR constraint E, we add the following conjunction of formulae to the constraint theory:

⊗_{j=1}^{n} !∀( E(t1, ..., tj, ..., tn) ⊗ !(tj =̇ t'j) ⊸ E(t1, ..., t'j, ..., tn) )

E.g. for a CHR constraint edge/2, we would add the following formula to the constraint theory:

!∀( edge(t1, t2) ⊗ !(t1 =̇ t'1) ⊸ edge(t'1, t2) ) ⊗ !∀( edge(t1, t2) ⊗ !(t2 =̇ t'2) ⊸ edge(t1, t'2) )

This translated version of the constraint theory CT, including the formulae for equality, will be called the linear constraint theory LCT. For every intuitionistic formula ϕ with CT ⊢LJ ϕ, we have !LCT ⊢ ϕ^G, where ^G represents translation from intuitionistic logic to linear logic according to [14].

3.2 Extending the Semantics

In this section we will explain the basic idea of our proposed semantics. We will first show how don’t-care nondeterminism is already represented in the linear logic semantics of pure CHR. Then we will discuss how we can extend this semantics for CHR∨ .

In the linear logic semantics for pure CHR, the program is mapped to a ”⊗” conjunction of formulae of the form !∀( !Di^L ⊸ (Fi^L ⊸ ∃x̄i.Hi^L) ). This translation logically implies internal choice whenever more than one rule is applicable. We consider the following program P1:

H ⇔ D | F1
H ⇔ D | F2

The logical reading of this program is:

P1^L = !∀( !D^L ⊸ (H^L ⊸ ∃x̄1.F1^L) ) ⊗ !∀( !D^L ⊸ (H^L ⊸ ∃x̄2.F2^L) )

This is logically equivalent to:

!∀( !D^L ⊗ H^L ⊸ (∃x̄1.F1^L) & (∃x̄2.F2^L) )

Operationally, the case of several applicable rules is handled by committed choice, i.e. as don’t-care nondeterminism. Thus, the linear logic semantics of CHR contains an implicit mapping of don’t-care nondeterminism to internal choice.

The syntax of CHR∨ differs from the syntax of pure CHR only in the presence of disjunctions in goals. Operationally, disjunction is evaluated as don’t-know nondeterminism, i.e. when one of several options is chosen, the other options will not be discarded. Analogously to Prolog, don’t-know nondeterminism is usually implemented as backtracking. We will consider another example program P2, with disjunction:

H ⇔ D | F1 ∨ F2

We consider the derivation that results from the same program state that we used in our previous example:

⟨H; D⟩ ↦Simplify ⟨F1 ∨ F2; D⟩ ↦Split ⟨F1; D⟩ ∨ ⟨F2; D⟩

We notice that there is no choice between ⟨F1; D⟩ and ⟨F2; D⟩ here. Instead, there is a disjunction ⟨F1; D⟩ ∨ ⟨F2; D⟩ and there is no way to remove either of the two. By contrast, in our previous example program P1, a state ⟨H; D⟩ might be followed by either ⟨F1; D⟩ or ⟨F2; D⟩. This is reflected in its logical reading: Both sequents P1^L ⊢ ⟨H; D⟩^L ⊸ ⟨F1; D⟩^L and P1^L ⊢ ⟨H; D⟩^L ⊸ ⟨F2; D⟩^L are provable.

The logical reading of program P2 should reflect the fact that we can remove neither ⟨F1; D⟩ nor ⟨F2; D⟩. However, it also has to reflect the fact that ⟨F1; D⟩ and ⟨F2; D⟩ are processed independently from each other.
Therefore, disjunctions of states and disjunctions in goals will be translated to ”⊕” disjunctions. Under this semantics, example program P2 translates as follows:

P2^L = !∀( !D^L ⊗ H^L ⊸ (∃x̄1.F1^L) ⊕ (∃x̄2.F2^L) )

The translation of our disjunction of states will likewise be an ”⊕” disjunction:

⟨F1; D⟩^L ⊕ ⟨F2; D⟩^L

This translation satisfies all of our conditions and the sequent P2^L ⊢ ⟨H; D⟩^L ⊸ ⟨F1; D⟩^L ⊕ ⟨F2; D⟩^L is provable. Failed states, i.e. states of the form ⟨G; false⟩, translate to ”0”, which is the neutral element w.r.t. ”⊕”. Thus we have also modelled the removal of failed states from the current disjunction of states.
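The operational contrast between P1 and P2 can be stated in miniature: committed choice keeps exactly one successor, Split keeps both. The following toy illustration is our own encoding, not a real CHR engine.

```python
import random

# P1 has two rules for H: the engine commits to one of them and the
# alternative is discarded (don't-care nondeterminism, internal choice).
def run_p1(state):                           # H <=> F1   and   H <=> F2
    chosen = random.choice([["F1"], ["F2"]])  # committed choice
    return [chosen]                          # a single surviving state

# P2 has one rule whose body is a disjunction: Split keeps both branches
# (don't-know nondeterminism, external choice).
def run_p2(state):                           # H <=> F1 ∨ F2
    return [["F1"], ["F2"]]

print(run_p1(["H"]))   # either [['F1']] or [['F2']], never both
print(run_p2(["H"]))   # [['F1'], ['F2']]: both branches survive
```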

3.3 Definition of the Extended Semantics

Figure 6 presents our linear logic semantics for CHR∨. The operator ^L represents translation into linear logic.

Built-in constraints:
  ⊤^L ::= 1
  ⊥^L ::= 0
  c(t̄)^L ::= !c(t̄)
  (C ∧ D)^L ::= C^L ⊗ D^L
CHR constraints:
  {e(t̄)}^L ::= e(t̄)
  (E ∪ F)^L ::= E^L ⊗ F^L
Goals:
  {C}^L ::= C^L
  (G ∪ H)^L ::= G^L ⊗ H^L
  (G ∨ H)^L ::= G^L ⊕ H^L
Initial states:   S0^L = ⟨G; true⟩^L ::= G^L
Derived states:   Sa^L = ⟨G; C⟩^L ::= ∃x̄a(G^L ⊗ C^L)
Parallel states:  (S ∨ T)^L ::= S^L ⊕ T^L
Simplify rules:   (E ⇔ C | G)^L ::= !∀( !C^L ⊗ E^L ⊸ ∃ȳ.G^L )
Propagate rules:  (E ⇒ C | G)^L ::= !∀( !C^L ⊗ E^L ⊸ E^L ⊗ ∃ȳ.G^L )
Programs:         (R1 ... Rm)^L ::= R1^L ⊗ ... ⊗ Rm^L

Fig. 6. Linear-logic declarative semantics

Both types of constraints are mapped to ”⊗” conjunctions of atomic constraints; atomic built-in constraints are banged. Goals containing disjunctions and disjunctions of states are mapped to ”⊕” disjunctions. Initial states are translated as goals, whereas for a non-initial state Sa, the local variables x̄a of Sa – i.e. the variables that do not appear in the corresponding initial state S0 – are existentially quantified. CHR∨ rules are mapped to linear implications with the translations of head and guard on the condition side and that of the body as the consequence. The local variables ȳ of the rule body are existentially quantified. The bang before the translation of the guard is actually redundant but kept for formal clarity. A CHR∨ program is translated into a ”⊗” conjunction of the translations of its rules.
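On the propositional fragment (no quantifiers, ground constraints), the translation of Fig. 6 is a small compositional pretty-printer. The encoding below is our own sketch, reusing the tagged-tuple goal representation from earlier examples.

```python
# Pretty-printer for the Fig. 6 translation on the propositional fragment.
# Goals are tuples; built-ins are tagged ('builtin', c), disjunctions
# ('or', G, H); plain strings are atomic CHR constraints.
def goal_L(g):
    if not g:
        return "1"                                   # empty goal: true^L = 1
    return " ⊗ ".join(item_L(x) for x in g)

def item_L(x):
    if isinstance(x, tuple) and x[0] == "builtin":
        return "!" + x[1]                            # c(t̄)^L = !c(t̄)
    if isinstance(x, tuple) and x[0] == "or":
        return f"({goal_L(x[1])} ⊕ {goal_L(x[2])})"  # {G ∨ H}^L = G^L ⊕ H^L
    return x                                         # {e(t̄)}^L = e(t̄)

def rule_L(head, guard, body):
    # (E ⇔ C | G)^L = !∀( !C^L ⊗ E^L ⊸ G^L ), quantifiers omitted here
    lhs = " ⊗ ".join((["!" + guard] if guard else []) + list(head))
    return f"!∀({lhs} ⊸ {goal_L(body)})"

print(rule_L(("bird",), "", (("or", ("albatross",), ("penguin",)),)))
# !∀(bird ⊸ (albatross ⊕ penguin))
```

Applied to the introductory rule, it reproduces the formula given in Sect. 1 (up to the omitted quantifiers).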

4 Soundness and Completeness

Concerning the soundness and completeness of our declarative semantics w.r.t. the operational semantics, the following theorems hold.

Theorem 1 (Soundness). Let S0 be an initial CHR state and S̄n a disjunction of states which is derivable from S0 under a program P and a linear constraint theory LCT. Then the following holds:

!LCT, P^L ⊢ ∀( S0^L ⊸ S̄n^L )

Theorem 2 (Completeness). If S0 is an initial CHR state and S̄n is a disjunction of states such that

!LCT, P^L ⊢ ∀( S0^L ⊸ S̄n^L )

then there is a disjunction of states S̄ν with a finite derivation S0 ↦* S̄ν such that

!LCT ⊢ S̄ν^L ⊸ S̄n^L

Theorem 2 states that if an initial CHR∨ state S0 logically implies a disjunction S̄n of CHR∨ states, then it is operationally possible to reach a disjunction of states S̄ν that implies S̄n under the constraint theory CT. The full proofs for Theorems 1 and 2 can be found in Appendix B and C.

Proof Sketch. While Theorem 1 can be reduced to the proof of the soundness theorem for the linear logic semantics for pure CHR, the proof of Theorem 2 is more challenging. The first major point to show is that the sequent !LCT, P^L ⊢ ∀( S0^L ⊸ S̄n^L ) can be transformed into a logically equivalent sequent which we call restricted. In that restricted sequent, every logical reading (!ρ) of a CHR∨ rule is substituted by a finite number of formulae of the form (1&ρ). This means that the number of CHR∨ rule applications is not strictly determined, but limited. (Strict determination is not always possible.) We prove this by defining a set of transformation rules that transform a formal proof of our original sequent into a formal proof of the restricted sequent.

We then show that we can apply CHR∨ transitions to the state S0, which corresponds to S0^L, such that the logical reading of the resulting state also implies S̄n^L. This is easy to show for Solve and Split. For the case that none of those are applicable, we show that either a formula (1&ρ) in the transformed sequent corresponds to an applicable CHR∨ rule, or we have already found the state S̄ν. We do this by induction over the sequents of a formal proof. The finite number of sub-formulae of the form (1&ρ) implies that we can derive S̄ν with a finite derivation.

Significance of Theorem 2: The linear constraint theory LCT determines the handling of the built-in constraints only; it does not have an effect on the actual CHR constraints. Consequently, for every CHR state ⟨Eν; Cν⟩ in S̄ν, there is a CHR state ⟨En; Cn⟩ in S̄n such that Eν and En differ only in the built-in constraints. Another notable point is that the derivation S0 ↦* S̄ν is explicitly finite. Furthermore, our semantics implicitly defines an interesting segment of intuitionistic linear logic, consisting of all sequents of the form

!LCT, P^L ⊢ ∀( S0^L ⊸ S̄n^L ). For any such sequent, it is a necessary condition for its provability that a finite number of modus ponens applications – mimicking Simplify transitions – can simplify it to a sequent of the form !LCT ⊢ S̄ν^L ⊸ S̄n^L, where the proof of the latter sequent can be reduced to a proof in classical intuitionistic logic. Furthermore, if the modus ponens applications and a proof for !LCT ⊢ S̄ν^L ⊸ S̄n^L are known, these can be used to construct a proof for !LCT, P^L ⊢ ∀( S0^L ⊸ S̄n^L ). We suppose that these findings will enable us to make proof search in our specific segment of linear logic significantly more efficient.

5 Example

This example shows how we can apply our linear logic semantics to reason about CHR∨ programs that integrate several programming paradigms. The following classic Prolog program implements a ternary append predicate for lists, representing the fact that the third argument is a concatenation of the first two.

append(X,Y,Z) ← X=̇[], Y=̇L, Z=̇L.
append(X,Y,Z) ← X=̇[H|L1], Y=̇L2, Z=̇[H|L3], append(L1,L2,L3).

We can embed this program faithfully into CHR∨ by explicitly stating the don’t-know nondeterminism using the ∨ operator.

app1@ append(X,Y,Z) ⇔
    ( X=̇[], Y=̇L, Z=̇L
    ∨ X=̇[H|L1], Y=̇L2, Z=̇[H|L3], append(L1,L2,L3) ).

The linear logic reading of this program looks as follows:

!∀X,Y,Z. ( append(X,Y,Z) ⊸ ∃L,L1,L2,L3,H.
    ( !(X=̇[]) ⊗ !(Y=̇L) ⊗ !(Z=̇L) )
  ⊕ ( !(X=̇[H|L1]) ⊗ !(Y=̇L2) ⊗ !(Z=̇[H|L3]) ⊗ append(L1,L2,L3) ) )

Now we want to add a rule that should not change the overall semantics of the program but increase efficiency by intercepting the worst case, where the second argument is an empty list. Note that the introduction of this rule adds don’t-care nondeterminism to the program. The rule looks as follows:

app2@ append(X,Y,Z) ⇔ Y=̇[] | X=̇Z.

The rule app2 corresponds to the following logical reading:

!∀X,Y,Z. ( !(Y=̇[]) ⊗ append(X,Y,Z) ⊸ X=̇Z )

By induction over the length of the list X, we can show that the logical reading of app2 is entailed by the logical reading of app1. Hence, the program consisting of rule app1 only and the program consisting of both app1 and app2 are operationally equivalent.
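The don't-know disjunction in app1 corresponds to exhaustively exploring both branches, as a Prolog engine would by backtracking. A sketch of this exploration for ground queries append(X, Y, Z) with Z known (our own encoding, using Python lists for Prolog lists):

```python
# The disjunction in app1 explored exhaustively: solve append(X, Y, Z)
# for a known Z by trying both rule branches. Each recursive call forks,
# mirroring the ⊕ in the linear-logic reading.
def append_solutions(z):
    # branch 1: X=[], Y=Z
    yield [], z
    # branch 2: X=[H|L1], Z=[H|L3], with append(L1, Y, L3) for the tails
    if z:
        h, l3 = z[0], z[1:]
        for l1, y in append_solutions(l3):
            yield [h] + l1, y

print(list(append_solutions([1, 2])))
# [([], [1, 2]), ([1], [2]), ([1, 2], [])]
```

The app2 shortcut corresponds to the last solution, where the second component is []: it lets an engine commit to X = Z immediately in that case, adding no new solutions, in line with the entailment argument above.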

6 Discussion

6.1 Related Work

Common linear logic languages such as LO [4], Lygon [15] and Lolli [17] rely on a backchaining operationalization of linear logic. Thus, they are not directly comparable to our linear logic semantics of an existing forward-chaining programming language. The most closely related approaches to this work are therefore Miller’s linear logic semantics for the π-calculus [18] and the linear logic semantics for the CC/LCC class of programming languages by Fages, Ruet and Soliman [9]. In Miller’s approach, it is don’t-care – not don’t-know – nondeterminism that is mapped to the linear additive disjunction ”⊕”. Fages et al. are closer to our approach in that they explicitly map don’t-care nondeterminism to additive conjunction (which we do implicitly). However, neither the π-calculus nor the CC/LCC class of languages features the dichotomy between don’t-know and don’t-care nondeterminism that CHR∨ does.

While not directly related to our work, we find it worth mentioning two approaches to a logical characterization of the dichotomy of forward and backward chaining: In [16], Harland, Pym, and Winikoff incorporate forward reasoning features directly into the sequent calculus of linear logic. In [7], Chaudhuri, Pfenning, and Price improved the focusing inverse method for linear logic such that it generalizes both forward-chaining and backward-chaining proof search. These works are loosely related to our work because the don’t-know nondeterminism of CHR∨ can be used to embed SLD resolution, which is inherently a backward-chaining approach.

6.2 Conclusion

We have presented a linear logic semantics for CHR∨. The core of our result is the mapping of the dualism between don’t-care and don’t-know nondeterminism in CHR∨ to the dualism of internal and external choice in linear logic. Furthermore, we use Girard’s translation to embed the constraint theory CT that is handled by the built-in constraint handler. This semantics provides us with a powerful formalism to reason about CHR∨ programs.

As CHR∨ is considered to be a formalism to experiment with logical programming paradigms, we expect to be able to apply our result to other programming languages and paradigms that mix classical constraint handling, forward chaining and backward chaining. The applicability of our proposed semantics and its model checking capabilities offer a promising field for future research. Another aspect for future work would be the design of an algorithm to efficiently find linear logic proofs in the segment of linear logic that is defined by our declarative semantics. Obviously, this segment is much smaller than the full segment of intuitionistic linear logic, so the efficiency of proof search might be increased significantly. Furthermore, our completeness theorem and its proof suggest approaches for limiting the search space significantly.

Acknowledgements We are grateful to the reviewers of an earlier version of this paper for their helpful remarks. Hariolf Betz has been funded by an LGFG grant.

References

1. S. Abdennadher: Operational Semantics and Confluence of Constraint Handling Rules. Proceedings of the 3rd International Conference on Principles and Practice of Constraint Programming (CP 1997), Austria, October 1997.
2. S. Abdennadher, H. Schütz: CHR∨: A Flexible Query Language. International Conference on Flexible Query Answering Systems (FQAS'98), Springer LNCS, Roskilde, Denmark, May 1998.
3. S. Abdennadher: A Language for Experimenting with Declarative Paradigms. Second Workshop on Rule-Based Constraint Reasoning and Programming, Singapore, 2000.
4. J. Andreoli, R. Pareschi: LO and Behold! Concurrent Structured Processes. ACM SIGPLAN Notices, Proceedings OOPSLA/ECOOP '90, 25(10):44-56, 1990.
5. H. Betz: A Linear Logic Semantics for CHR. Master Thesis, University of Ulm, October 2004. [www.informatik.uni-ulm.de/pm/mitarbeiter/fruehwirth/other/betzdipl.ps.gz]
6. H. Betz, T. Frühwirth: A Linear-Logic Semantics for Constraint Handling Rules. Proceedings of CP 2005, 137-151, Springer, 2005.
7. K. Chaudhuri, F. Pfenning, G. Price: A Logical Characterization of Forward and Backward Chaining in the Inverse Method. Proceedings of IJCAR'06, pp. 97-111, Seattle, Springer LNCS 4130, August 2006.
8. G. J. Duck, P. J. Stuckey, M. G. de la Banda, C. Holzbaur: The Refined Operational Semantics of Constraint Handling Rules. Proceedings of the 20th International Conference on Logic Programming, 2004.
9. F. Fages, P. Ruet, S. Soliman: Linear Concurrent Constraint Programming: Operational and Phase Semantics. Information and Computation, 165(1):14-41, 2001.
10. T. Frühwirth: Constraint Handling Rules. Constraint Programming: Basics and Trends, Springer LNCS 910, 1995.
11. T. Frühwirth: Theory and Practice of Constraint Handling Rules. Journal of Logic Programming, 37(1-3):95-138, 1998.
12. T. Frühwirth, A. Di Pierro, H. Wiklicky: Probabilistic Constraint Handling Rules. 11th International Workshop on Functional and (Constraint) Logic Programming (WFLP 2002), 2002.
13. T. Frühwirth, S. Abdennadher: Essentials of Constraint Programming. Springer, 2003.
14. J.-Y. Girard: Linear Logic. Theoretical Computer Science, 50:1-102, 1987.
15. J. Harland, D. Pym, M. Winikoff: Programming in Lygon: An Overview. Springer LNCS 1101:391-405, Munich, 1996.
16. J. Harland, D. Pym, M. Winikoff: Forward and Backward Chaining in Linear Logic. Proceedings of the CADE-17 Workshop on Proof-Search in Type-Theoretic Systems, Pittsburgh, 2000.
17. J. Hodas, D. Miller: Logic Programming in a Fragment of Intuitionistic Linear Logic. Proceedings of the 6th IEEE Annual Symposium on Logic in Computer Science, IEEE Computer Society Press, New York, 1991.
18. D. Miller: The π-calculus as a Theory in Linear Logic: Preliminary Results. ELP 1992:242-264, Bologna, Italy, 1992.
19. P. Wadler: A Taste of Linear Logic. Proceedings of the 18th International Symposium on Mathematical Foundations of Computer Science, Gdańsk, 1993.

A Sequent Calculus Representation of Linear Logic

Figure 7 lists the rules of inference that constitute the sequent calculus of intuitionistic linear logic. "⊢" expresses implication as derivable by this calculus. We call the formulae on the left-hand side of "⊢" the antecedent and those on the right-hand side the consequent. Each rule of inference has zero or more preconditional sequents (premises) and exactly one postconditional sequent (conclusion).

(Id):          α ⊢ α
(Exchange):    from Γ, β, α, Δ ⊢ γ infer Γ, α, β, Δ ⊢ γ
(Cut):         from Γ ⊢ α and α, Δ ⊢ β infer Γ, Δ ⊢ β
(R⊗):          from Γ ⊢ α and Δ ⊢ β infer Γ, Δ ⊢ α ⊗ β
(L⊗):          from Γ, α, β ⊢ γ infer Γ, α ⊗ β ⊢ γ
(R⊸):          from Γ, α ⊢ β infer Γ ⊢ α ⊸ β
(L⊸):          from Γ ⊢ α and β, Δ ⊢ γ infer Γ, α ⊸ β, Δ ⊢ γ
(R&):          from Γ ⊢ α and Γ ⊢ β infer Γ ⊢ α & β
(L&₁):         from Γ, α ⊢ γ infer Γ, α & β ⊢ γ
(L&₂):         from Γ, β ⊢ γ infer Γ, α & β ⊢ γ
(R⊕₁):         from Γ ⊢ α infer Γ ⊢ α ⊕ β
(R⊕₂):         from Γ ⊢ β infer Γ ⊢ α ⊕ β
(L⊕):          from Γ, α ⊢ γ and Γ, β ⊢ γ infer Γ, α ⊕ β ⊢ γ
(R!):          from !Γ ⊢ α infer !Γ ⊢ !α
(Dereliction): from Γ, α ⊢ β infer Γ, !α ⊢ β
(Contraction): from Γ, !α, !α ⊢ β infer Γ, !α ⊢ β
(Weakening):   from Γ ⊢ β infer Γ, !α ⊢ β
(L∀):          from Γ, α[t/x] ⊢ β infer Γ, ∀x α ⊢ β
(R∀):          from Γ ⊢ β[a/x] infer Γ ⊢ ∀x β   (a fresh)
(L∃):          from Γ, α[a/x] ⊢ β infer Γ, ∃x α ⊢ β   (a fresh)
(R∃):          from Γ ⊢ β[t/x] infer Γ ⊢ ∃x β
(R1):          ⊢ 1
(L1):          from Γ ⊢ Δ infer Γ, 1 ⊢ Δ
(R⊤):          Γ ⊢ ⊤
(L0):          0 ⊢ α

Fig. 7. The Sequent Calculus of Linear Logic
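As a small worked example (ours, not part of the original figure), the following derivation shows how (Id) and (L⊸) combine into a linear modus ponens:

```latex
% Linear modus ponens: \alpha, \alpha \multimap \beta \vdash \beta
\[
\dfrac{\alpha \vdash \alpha \;(\mathrm{Id}) \qquad \beta \vdash \beta \;(\mathrm{Id})}
      {\alpha,\; \alpha \multimap \beta \;\vdash\; \beta}\;(L\multimap)
\]
```

Note that the resource α is consumed by the application: in contrast to !(α ⊸ β), the implication α ⊸ β cannot be reused.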

B Proof of Soundness

The proof of soundness for the proposed semantics largely consists of a reduction to the corresponding theorem for pure CHR, which has been proven in [5]. It reads as follows:

Theorem 3 (Soundness). Let P be a CHR program and S₀ be an initial state. If S₀ has a derivation with a final state Sₙ, then P^L, !CT^L ⊢ ∀(S₀^L ⊸ Sₙ^L).

Note that in this case the operator ·^L refers to the linear logic semantics of pure CHR as presented in [6]. That semantics differs from the linear logic semantics for CHR∨ only in that there is no translation for disjunctions in goals. Operationally, the main difference between CHR and CHR∨ is the presence of the Split rule in CHR∨, which has no equivalent in pure CHR. The other rules of inference differ only marginally. We consider w.l.o.g. the Solve rule of CHR∨:

Solve
If    CT ⊨ (C ∧ D₁) ↔ D₂
then  S̄ ∨ ⟨{C} ∪ G; D₁⟩ ∨ T̄ ↦ S̄ ∨ ⟨G; D₂⟩ ∨ T̄

This rule differs from the corresponding rule of pure CHR only in the presence of S̄ and T̄. Now we assume that for any two pure CHR states ⟨{C} ∪ G; D₁⟩ and ⟨G; D₂⟩ such that ⟨{C} ∪ G; D₁⟩ ↦_Solve ⟨G; D₂⟩, it is true that

!CT^L, P^L ⊢ ⟨{C} ∪ G; D₁⟩^L ⊸ ⟨G; D₂⟩^L

Assuming this, we can prove by means of the sequent calculus that

!CT^L, P^L ⊢ (S̄^L ⊕ ⟨{C} ∪ G; D₁⟩^L ⊕ T̄^L) ⊸ (S̄^L ⊕ ⟨G; D₂⟩^L ⊕ T̄^L)

The latter expression implies that the linear logic semantics for CHR∨ is correct w.r.t. the Solve rule. The Simplify rule (and the Propagate rule) can be handled analogously. Therefore we conclude that Theorem 1 holds if both of the following are true:

(a) Theorem 3 holds.
(b) If Sₙ is a CHR∨ state and S̄ₙ₊₁ is a disjunction of CHR∨ states such that Sₙ ↦_Split S̄ₙ₊₁, then the following holds:

!CT^L, P^L ⊢ ∀(Sₙ^L ⊸ S̄ₙ₊₁^L)

As (a) has been proven in [6], we only have to show that (b) holds. From the operational semantics it follows that Sₙ and S̄ₙ₊₁ are of the following forms:

Sₙ = ⟨G ∪ {G₁ ∨ G₂}, C⟩
S̄ₙ₊₁ = ⟨G ∪ G₁, C⟩ ∨ ⟨G ∪ G₂, C⟩

Consequently, their logical readings are as follows:

Sₙ^L = G^L ⊗ (G₁^L ⊕ G₂^L) ⊗ C^L
S̄ₙ₊₁^L = (G^L ⊗ G₁^L ⊗ C^L) ⊕ (G^L ⊗ G₂^L ⊗ C^L)

A simple sequent calculus derivation reveals that

⊢ Sₙ^L ⊸ S̄ₙ₊₁^L

is a tautology. This implies the correctness of (b).
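The simple sequent calculus derivation appealed to here can be sketched as follows (our reconstruction); it case-splits on the disjunction with (L⊕) and rebuilds each branch with (R⊗) and (R⊕):

```latex
% Core of the derivation of S_n^L \vdash \bar{S}_{n+1}^L, where
% S_n^L = G^L \otimes (G_1^L \oplus G_2^L) \otimes C^L:
\[
\dfrac{\dfrac{G^L, G_1^L, C^L \vdash G^L \otimes G_1^L \otimes C^L}
             {G^L, G_1^L, C^L \vdash \bar{S}_{n+1}^L}\;(R\oplus_1)
       \qquad
       \dfrac{G^L, G_2^L, C^L \vdash G^L \otimes G_2^L \otimes C^L}
             {G^L, G_2^L, C^L \vdash \bar{S}_{n+1}^L}\;(R\oplus_2)}
      {G^L,\; G_1^L \oplus G_2^L,\; C^L \vdash \bar{S}_{n+1}^L}\;(L\oplus)
\]
% The top sequents follow from (Id) and (R\otimes); two applications of
% (L\otimes) then yield S_n^L \vdash \bar{S}_{n+1}^L, and (R\multimap)
% gives \vdash S_n^L \multimap \bar{S}_{n+1}^L.
```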

C Proof of Completeness

C.1 General Definitions and Properties

Sub-formulae

Definition 1 (Sub-formula). Let ϕ, ψ be formulae of intuitionistic linear logic. Then we say that ϕ is a sub-formula of ψ iff one of the following holds:
1. ψ = ϕ
2. ψ is one of !α, ∀x(α), or ∃x(α), and ϕ is a sub-formula of α.
3. ψ is one of (α ⊗ β), (α ⊸ β), (α ⊕ β), or (α & β), and ϕ is a sub-formula of either α or β.

Lemma 1 (Preservation of Sub-formulae). Let ϕ₁, ..., ϕₙ₋₁ ⊢ ϕₙ be a sequent and let π be a cut-free proof of ϕ₁, ..., ϕₙ₋₁ ⊢ ϕₙ. If α₁, α₂, ..., αₘ₋₁ ⊢ αₘ is a sequent appearing in π, then for every αⱼ there is an i ∈ {1, ..., n} such that αⱼ is a sub-formula of ϕᵢ.

Proof of Lemma 1. We can easily verify that Lemma 1 holds for any single rule of inference except (Cut). We can hence show by induction that it holds for every cut-free proof π. □

Lemma 2 (Position of Sub-formulae in Derived Sequents). If Γ ⊢ α is a sequent appearing in a cut-free proof π of a sequent Γ₀ ⊢ α₀, then either (i) α is a sub-formula of α₀, or (ii) Γ₀ has a sub-formula (ϕ ⊸ ψ) such that α is a sub-formula of ϕ.

Proof of Lemma 2. We can easily verify that proposition (i) holds for any single rule of inference except (Cut) and (L⊸). In the case of (L⊸), proposition (ii) holds. We prove Lemma 2 by induction. □

Sets and Multisets

As we will make extensive use of multisets, we formally define the concept here, as well as the operators we will use on them.

Definition 2 (Multiset). A multiset is a tuple (A, m), where A is a set and m is a function m : A → ℕ. In practice, we will usually write (A, m) simply as A.

Definition 3 (Multiset Operators). Let (A, m) and (B, n) be multisets. The following operators are defined:

(A, m) ∪ (B, n) = (A ∪ B, f) where f(a) = max(m(a), n(a))
(A, m) ∩ (B, n) = (A ∩ B, f) where f(a) = min(m(a), n(a))
(A, m) + (B, n) = (A ∪ B, f) where f(a) = m(a) + n(a)
(A, m) · (B, n) = (A ∩ B, f) where f(a) = m(a) · n(a)
(A, m) − (B, n) = ({a ∈ A | m(a) > ñ(a)}, f) where f(a) = m(a) − ñ(a), with ñ : A ∪ B → ℕ defined by ñ(a) = n(a) if a ∈ B and ñ(a) = 0 otherwise

Definition 4 (⊗; !⊗). We use the operators ⊗ and !⊗ to convert multisets of logical formulae into conjunctions thereof:

Σ^⊗ = ⨂_{σ∈Σ} σ
Σ^{!⊗} = ⨂_{σ∈Σ} !σ

The following set definitions will hold throughout the rest of this section. We use B to denote the set of built-in constraints and C to denote the set of CHR∨ constraints. Furthermore, R will denote the set of all logical readings of CHR∨ rules, yet without the leading bang:

R = { ρ | (!ρ) = (F ⇔ D | H)^L where (F ⇔ D | H) is a CHR∨ rule }
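As an illustration only (not part of the paper), the multiset operators of Definition 3 can be sketched in Python, modeling a multiset (A, m) as a `collections.Counter`; all function names below are our own:

```python
from collections import Counter

# A multiset (A, m) is modeled as a Counter mapping elements to
# multiplicities; absent elements implicitly have multiplicity 0,
# which matches the auxiliary function ñ in Definition 3.

def munion(a, b):   # (A,m) ∪ (B,n): element-wise maximum
    return Counter({x: max(a[x], b[x]) for x in set(a) | set(b)})

def minter(a, b):   # (A,m) ∩ (B,n): element-wise minimum
    return Counter({x: min(a[x], b[x]) for x in set(a) & set(b)})

def msum(a, b):     # (A,m) + (B,n): element-wise sum
    return Counter({x: a[x] + b[x] for x in set(a) | set(b)})

def mprod(a, b):    # (A,m) · (B,n): element-wise product
    return Counter({x: a[x] * b[x] for x in set(a) & set(b)})

def mdiff(a, b):    # (A,m) − (B,n): keep elements with m(a) > ñ(a)
    return Counter({x: a[x] - b[x] for x in a if a[x] > b[x]})

a = Counter({'p': 3, 'q': 1})
b = Counter({'p': 1, 'r': 2})
assert mdiff(a, b) == Counter({'p': 2, 'q': 1})
assert munion(a, b) == Counter({'p': 3, 'q': 1, 'r': 2})
```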

C.2 Restriction of Banged Resources

We will simplify matters by defining an abbreviated notation for conjunctions of a specific type of formula.

Definition 5 (≤). Let Φ be a set of formulae of intuitionistic linear logic and n a natural number. Then ϕ^{≤n} is defined for every ϕ ∈ Φ as follows:

ϕ^{≤0} = 1
ϕ^{≤(n+1)} = (1 & ϕ) ⊗ ϕ^{≤n}

We also extend the ≤ operator to sets:

Φ^{≤n} = { ϕ^{≤n} | ϕ ∈ Φ }

The formula ϕ^{≤n} thus denotes a conjunction of up to n copies of ϕ, their exact number being an internal choice.

Lemma 3 (Properties of ≤). We can prove the following properties of ≤:

ϕ^{≤n} = (1 & ϕ) ⊗ (1 & ϕ) ⊗ ··· ⊗ (1 & ϕ)   (exactly n times)
ϕ^{≤n} ⊢ ϕ ⊗ ϕ ⊗ ··· ⊗ ϕ   (up to n times)
ϕ^{≤m} ⊢ ϕ^{≤n} iff m ≥ n
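For instance (our added example), unfolding Definition 5 for n = 2 gives:

```latex
\[
\varphi^{\leq 2} \;=\; (1 \,\&\, \varphi) \otimes (1 \,\&\, \varphi) \otimes 1
\]
% Each factor (1 & \varphi) offers an internal choice between 1
% (discard the copy) and \varphi (use it). For example,
% \varphi^{\leq 2} \vdash \varphi is derived by choosing \varphi in
% one factor via (L&_2) and discarding the other via (L&_1) together
% with (L1), folding the tensors with (L\otimes).
```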

Lemma 4 (Restriction of Logical Readings of CHR∨ Rules). Let Λ ⊢ μ be a sequent of intuitionistic linear logic such that

Λ = !CT^L ⊗ ⨂ᵢ !ρᵢ ⊗ ⨁ⱼ (∃x̄ⱼ.ϕⱼ)   (1)
μ = ⨁ₖ (∃ȳₖ.ψₖ)   (2)

where ρᵢ ∈ R for every i, and the x̄ⱼ, ȳₖ are sequences of variables such that each (∃x̄ⱼ.ϕⱼ) and each (∃ȳₖ.ψₖ) is the linear logic reading of a CHR state. Then for every i there is a natural number nᵢ such that

!CT^L ⊗ (⨂ᵢ ρᵢ^{≤nᵢ}) ⊗ ⨁ⱼ (∃x̄ⱼ.ϕⱼ) ⊢ μ   (3)

is provable iff (Λ ⊢ μ) is provable.

Proof of Lemma 4. ('⇐') We will prove the ('⇐') direction of Lemma 4 by transforming a cut-free sequent calculus derivation of (Λ ⊢ μ) into a derivation of (3). Let π be a cut-free derivation of the sequent Λ ⊢ μ. From π we construct an alternative derivation π′ by applying the transformations below to π wherever applicable, under the condition that α ∈ B:

(Identity)  α ⊢ α   is replaced by   (Identity)  !α ⊢ !α
(Dereliction)  from Γ, α ⊢ β infer Γ, !α ⊢ β   is replaced by   Γ, !α ⊢ β (no rule applied)
(R!)  from !Γ ⊢ α infer !Γ ⊢ !α   is replaced by   !Γ ⊢ !α (no rule applied)

In the resulting proof π′, all occurrences of built-in constraints in every sequent of the proof are banged. Furthermore, let us consider possible occurrences of the (R!) rule in π, i.e. inferences of !Γ ⊢ !α from !Γ ⊢ α. From the structure of (Λ ⊢ μ) and Lemma 2 we can conclude that there is no occurrence of (R!) in π where α ∉ B. We conclude that there is no occurrence of (R!) in π′. Consequently, π′ is a derivation of the sequent Λ ⊢ μ in which neither (Cut) nor (R!) occurs.

From π′, we now construct a derivation π″ in which for all ρ ∈ R, every occurrence of !ρ is substituted by a sub-formula of the form ρ^{≤n}. We construct π″ by applying the following transformation rules to π′ wherever applicable, under the condition that ρ ∈ R:

(Dereliction)  from ρ ⊢ β infer !ρ ⊢ β
  is replaced by  (D′)  from ρ ⊢ β infer ρ^{≤1} ⊢ β
(Weakening)  from Γ ⊢ β infer Γ, !ρ ⊢ β
  is replaced by  (W′)  from Γ ⊢ β infer Γ, ρ^{≤0} ⊢ β
(Contraction)  from Γ, !ρ, !ρ ⊢ β infer Γ, !ρ ⊢ β
  is replaced by  (C′)  from Γ, ρ^{≤n}, ρ^{≤m} ⊢ β infer Γ, ρ^{≤(n+m)} ⊢ β
(L⊕)  from Γ, !ρ, Δ, α ⊢ γ and Γ, !ρ, Δ, β ⊢ γ infer Γ, !ρ, Δ, α ⊕ β ⊢ γ
  is replaced by  (L⊕′)  from Γ, ρ^{≤n}, Δ, α ⊢ γ and Γ, ρ^{≤m}, Δ, β ⊢ γ infer Γ, ρ^{≤max(n,m)}, Δ, α ⊕ β ⊢ γ
(R&)  from Γ, !ρ, Δ ⊢ α and Γ, !ρ, Δ ⊢ β infer Γ, !ρ, Δ ⊢ α & β
  is replaced by  (R&′)  from Γ, ρ^{≤n}, Δ ⊢ α and Γ, ρ^{≤m}, Δ ⊢ β infer Γ, ρ^{≤max(n,m)}, Δ ⊢ α & β

From Lemma 3 it follows that the modified rules of inference (L⊕′) and (R&′) are valid. The other rules – (D′), (W′), and (C′) – can be reduced to existing rules by applying the definition of ≤:

(D′)  from ρ ⊢ β infer ρ^{≤1} ⊢ β   equals   (L&₂)  from ρ ⊢ β infer 1 & ρ ⊢ β
(W′)  from Γ ⊢ β infer Γ, ρ^{≤0} ⊢ β   equals   (L1)  from Γ ⊢ β infer Γ, 1 ⊢ β
(C′)  from Γ, ρ^{≤n}, ρ^{≤m} ⊢ β infer Γ, ρ^{≤(n+m)} ⊢ β   equals   (L⊗)  from Γ, ρ^{≤n}, ρ^{≤m} ⊢ β infer Γ, ρ^{≤n} ⊗ ρ^{≤m} ⊢ β

The resulting derivation π″ proves a sequent Λ′ ⊢ μ which differs from Λ ⊢ μ in that for every ρ ∈ R, all occurrences of !ρ in Λ are substituted by a sub-formula of the form ρ^{≤n}. Hence, we let

Λ′ = !CT^L ⊗ (⨂ᵢ ρᵢ^{≤nᵢ}) ⊗ ⨁ⱼ (∃x̄ⱼ.ϕⱼ)

which proves the ('⇐') direction of the lemma.

('⇒') We can show by application of the sequent calculus that !ϕ ⊢ (1 & ϕ) is provable for any ϕ. Hence, we can prove the ('⇒') direction by repeated application of the (Cut) rule. □

C.3 Completeness

Lemma 5 will allow us to state sequents describing derivations in CHR in a logically more convenient way.

Lemma 5. Let A, B, and C be formulae, where A does not contain free variables. The sequent A ⊢ ∀x̄(B ⊸ C) is provable iff A, B ⊢ C is provable.
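One direction of Lemma 5, for instance, can be displayed as follows (our sketch); (R∀) is applicable because A contains no free variables:

```latex
\[
\dfrac{\dfrac{A,\; B \vdash C}
             {A \vdash B \multimap C}\;(R\multimap)}
      {A \vdash \forall\bar{x}\,(B \multimap C)}\;(R\forall)
\]
% The converse direction instantiates the quantifier with (L\forall)
% and then applies (L\multimap) together with (Id) and (Cut).
```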

Proof of Lemma 5. By means of the sequent calculus we can easily derive A ⊢ ∀x̄(B ⊸ C) from A, B ⊢ C and vice versa. □

Lemma 6 (Disjunction). For arbitrary formulae α, β, γ, and δ of linear logic, the sequent α ⊗ (β ⊕ γ) ⊢ δ is provable iff both (α ⊗ β ⊢ δ) and (α ⊗ γ ⊢ δ) are provable.

Proof of Lemma 6. By sequent calculus. □

In the following lemma we use the usual notion of substitution, which we consider to be applicable to sets and multisets as well as to formulae.

Lemma 7 (Completeness of Simplify). Let π be the proof of a sequent Λ ⊢ μ such that

Λ = !CT^L ⊗ Π^⊗ ⊗ ∃x̄_Λ.(E_Λ^L ⊗ C_Λ^L)   (4)
μ = ∃x̄_μ.(E_μ^L ⊗ C_μ^L)   (5)

where Π = Π · R^{≤1}, x̄_Λ and x̄_μ are (possibly empty) sequences of variables, E_Λ and E_μ are CHR constraints, and C_Λ and C_μ are built-in constraints. Then one of the following propositions holds:

(i) It is true that:

!CT^L ⊗ ∃x̄_Λ.(E_Λ^L ⊗ C_Λ^L) ⊢ μ

(ii) There is a formula ρ^{≤1} ∈ Π and a substitution σ such that:

ρ = ∀(!D^L ⊗ F^L ⊸ ∃ȳ.H^L)  and  !CT^L ⊗ C_Λ^L ⊢ D^L σ  and  Fσ ⊆ E_Λ  and
!CT^L ⊗ (Π − {ρ})^⊗ ⊗ ∃x̄_Λ,ȳ.((E_Λ − Fσ)^L ⊗ H^L σ ⊗ C_Λ^L ⊗ D^L σ) ⊢ μ

Proposition (i) models the fact that the logical reading of a CHR state ⟨E_Λ; C_Λ⟩, which is contained in Λ, directly implies μ under the constraint theory CT. Proposition (ii) models the fact that Λ contains the restricted logical reading of an applicable CHR rule (F ⇔ D | H) such that the CHR state that results from its application proves μ as well.

Proof of Lemma 7. We consider a cut-free proof π of Λ ⊢ μ in which all atomic built-in constraints are introduced banged. We showed in the proof of Lemma 4 that such a proof is constructible. We now consider all occurrences of the (Id) rule in π where the introduced sequent is of the form ∃x̄.(E^L ⊗ C^L) ⊢ ∃x̄.(E^L ⊗ C^L), where E is not empty. If π contains no such occurrences, E_Λ must be empty and proposition (ii) trivially true. For the rest of this proof we will therefore assume that π does contain at least one occurrence of the rule (Id) of the aforementioned form.

We will prove Lemma 7 by induction over all sequents Λᵢ ⊢ μᵢ in π that are derived from a sequent of the form ∃x̄.(E^L ⊗ C^L) ⊢ ∃x̄.(E^L ⊗ C^L). For this proof, we make the assumption that for every such sequent Λᵢ ⊢ μᵢ, Λᵢ contains the logical reading of a CHR state ⟨Eᵢ; Cᵢ⟩. We consider each comma "," in Λᵢ to be a "⊗" conjunction (which is logically equivalent). We then define ∃x̄ᵢ.(Eᵢ^L ⊗ Cᵢ^L) as the largest sub-formula of Λᵢ, modulo commutativity, that can be read as the logical reading of a CHR state. Under the stated conditions, ∃x̄ᵢ.(Eᵢ^L ⊗ Cᵢ^L) is unique. Let Λ̃ᵢ be the formula for which Λᵢ = Λ̃ᵢ ⊗ ∃x̄ᵢ.(Eᵢ^L ⊗ Cᵢ^L).

We will inductively prove the assumption given below. For each sequent Λᵢ ⊢ μᵢ in π that is derived from a sequent of the form ∃x̄.(E^L ⊗ C^L) ⊢ ∃x̄.(E^L ⊗ C^L), one of the following propositions holds:

(i) It is true that:

!CT^L ⊗ ∃x̄ᵢ.(Eᵢ^L ⊗ Cᵢ^L) ⊢ μᵢ

(ii) There is a formula ρ^{≤1} ∈ Π and a substitution σᵢ such that:

ρ = ∀(!D^L ⊗ F^L ⊸ ∃ȳ.H^L)  and  !CT^L ⊗ Cᵢ^L ⊢ D^L σᵢ  and  Fσᵢ ⊆ Eᵢ  and
!CT^L ⊗ (Π − {ρ})^⊗ ⊗ ∃x̄ᵢ,ȳ.((Eᵢ − Fσᵢ)^L ⊗ H^L σᵢ ⊗ Cᵢ^L ⊗ D^L σᵢ) ⊢ μᵢ

Induction start. The (Identity) rule:

∃x̄.(E^L ⊗ C^L) ⊢ ∃x̄.(E^L ⊗ C^L)   (Id)

In the introduced sequent, Λᵢ equals μᵢ. Consequently, proposition (i) holds.

Induction step. The (Cut) rule is irrelevant because we required π to be cut-free. For most rules of inference, the contained logical reading of a CHR state ∃x̄ᵢ.(Eᵢ^L ⊗ Cᵢ^L) is the same in the preconditional as in the postconditional sequent. Hence, if the former satisfies our assumption, the latter consequently satisfies it as well. This is obviously the case for the rules of inference (Exchange) and (L⊗), as ∃x̄ᵢ.(Eᵢ^L ⊗ Cᵢ^L) is constructed modulo commutativity and equivalence of "," and "⊗". It is also the case for the rules of inference (L&₁), (L&₂), (R&), (L⊕), (R⊕₁), (R⊕₂), (L∀), (R∀), (L∃), and (L1), because the very structure of ∃x̄ᵢ.(Eᵢ^L ⊗ Cᵢ^L) implies that in none of these cases can the changed part of the sequent be a sub-formula thereof. As we required that all atomic built-in constraints are introduced banged, the respective changed parts of the sequents in (Dereliction), (Contraction), and (Weakening) have to be sub-formulae of !CT^L, and thus ∃x̄ᵢ.(Eᵢ^L ⊗ Cᵢ^L) does not change on the application of those rules either.

The (R⊗) rule: from Γ ⊢ α and Δ ⊢ β infer Γ, Δ ⊢ α ⊗ β.

For this rule, we distinguish two cases: If neither of the sequents Γ ⊢ α and Δ ⊢ β satisfies proposition (ii) – i.e. either both of them satisfy proposition (i) or one of them is not derived from a sequent of the form ∃x̄.(E^L ⊗ C^L) ⊢ ∃x̄.(E^L ⊗ C^L) – we can show that Γ, Δ ⊢ α ⊗ β satisfies proposition (i). If one or both of the preconditional sequents satisfy proposition (ii), then Γ, Δ ⊢ α ⊗ β satisfies proposition (ii).

The (L⊸) rule: from Γ ⊢ α and β, Δ ⊢ γ infer Γ, α ⊸ β, Δ ⊢ γ.

Again, we distinguish several cases. If Γ ⊢ α satisfies proposition (ii) (case 1), we can easily show that Γ, α ⊸ β, Δ ⊢ γ does so as well. Otherwise (case 2), α ⊸ β is either a sub-formula of Π (case 2.1) or of !CT^L (case 2.2). In case 2.1, it follows from Lemma 1 that Γ, α ⊸ β, Δ ⊢ γ satisfies proposition (ii). In case 2.2, we can tell that !CT^L, Γ, Δ ⊢ γ. Consequently, Γ, α ⊸ β, Δ ⊢ γ will satisfy either proposition (ii) or (i), depending on which one is satisfied by β, Δ ⊢ γ. □

C.4 Proof of the Completeness Theorem

With Lemma 7 we can now finally prove Theorem 2. The theorem reads as follows: If S₀ is an initial CHR state and S̄ₙ is a disjunction of states such that

!CT^L, P^L ⊢ ∀(S₀^L ⊸ S̄ₙ^L)

then there is a disjunction of states S̄_ν with a finite derivation S₀ ↦* S̄_ν such that

!CT^L ⊢ S̄_ν^L ⊸ S̄ₙ^L

Proof of Theorem 2. We will transform our sequent into an equivalent form that is easier to work with. From Lemma 5 we conclude that

!CT^L, P^L ⊢ ∀(S₀^L ⊸ S̄ₙ^L)

is equivalent to

!CT^L, P^L, S₀^L ⊢ S̄ₙ^L

which according to Lemma 4 is in turn equivalent to a sequent

!CT^L, Π₀^⊗, S₀^L ⊢ S̄ₙ^L

where Π₀ = Π₀ · R^{≤1} and |Π₀| < ∞. We will now show that we can sequentially apply CHR∨ transitions to the initial state S₀ such that for each subsequent disjunction of states S̄ᵢ there is a multiset Πᵢ ⊆ Π₀ such that

!CT^L, Πᵢ^⊗, S̄ᵢ^L ⊢ S̄ₙ^L

We will furthermore show that after a finite number of such transitions we reach a state S̄_ν for which !CT^L ⊢ S̄_ν^L ⊸ S̄ₙ^L holds.

Disjunctions of States: Lemma 6 implies that a sequent !CT^L, Πᵢ^⊗, Sᵢ^L ⊕ T̄ᵢ^L ⊢ S̄ₙ^L is provable iff both !CT^L, Πᵢ^⊗, Sᵢ^L ⊢ S̄ₙ^L and !CT^L, Πᵢ^⊗, T̄ᵢ^L ⊢ S̄ₙ^L are provable. Therefore, we will assume w.l.o.g. that every disjunction of states S̄ᵢ consists of only one state, which we denote as Sᵢ. The correctness for disjunctions of states S̄ᵢ follows from Lemma 6.

Solve: If a state Sᵢ is of the form Sᵢ = ⟨E ∪ {C}; D⟩, we can apply the Solve transition:

Sᵢ = ⟨E ∪ {C}; D⟩ ↦_Solve ⟨E; C ∧ D⟩ = Sᵢ₊₁

Obviously, the logical readings of these two states are equal: Sᵢ^L = Sᵢ₊₁^L = E^L ⊗ C^L ⊗ D^L. From this it follows that !CT^L, Πᵢ^⊗, Sᵢ₊₁^L ⊢ S̄ₙ^L. We define Πᵢ₊₁ = Πᵢ.

Split: If a state Sᵢ is of the form Sᵢ = ⟨G ∪ {H₁ ∨ H₂}; ⊤⟩, we can apply the Split transition:

Sᵢ = ⟨G ∪ {H₁ ∨ H₂}; ⊤⟩ ↦_Split ⟨G ∪ H₁; ⊤⟩ ∨ ⟨G ∪ H₂; ⊤⟩ = S̄ᵢ₊₁

In this case, Sᵢ^L is of the form Sᵢ^L = G^L ⊗ (H₁^L ⊕ H₂^L), and S̄ᵢ₊₁^L = (G^L ⊗ H₁^L) ⊕ (G^L ⊗ H₂^L). We can show that Sᵢ^L and S̄ᵢ₊₁^L are logically equivalent; therefore !CT^L, Πᵢ^⊗, S̄ᵢ₊₁^L ⊢ S̄ₙ^L. We define Πᵢ₊₁ = Πᵢ.
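The Split transition can be illustrated operationally by a minimal Python sketch (ours, not from the paper); a state is modeled as a pair of a goal list and a built-in store, a disjunctive goal as a tuple `('or', g1, g2)`, and a disjunction of states as a list. All names are illustrative only:

```python
# A CHR∨ state is (goals, builtins); a disjunction of states is a list
# of states. Split replaces one state containing a disjunctive goal by
# the two branches of the don't-know choice.

def split(states):
    """Apply the Split transition to the first splittable state, if any."""
    for i, (goals, c) in enumerate(states):
        for j, g in enumerate(goals):
            if isinstance(g, tuple) and g[0] == 'or':
                rest = goals[:j] + goals[j + 1:]
                left = (rest + [g[1]], c)   # branch committing to g1
                right = (rest + [g[2]], c)  # branch committing to g2
                return states[:i] + [left, right] + states[i + 1:]
    return states  # no disjunctive goal left: Split is not applicable

# The bird example from the introduction, after applying the first rule:
s0 = ([('or', 'albatross', 'penguin'), 'flies'], 'true')
print(split([s0]))
```

Applying `split` once yields the two branches `⟨flies, albatross; true⟩` and `⟨flies, penguin; true⟩`; applying it again leaves the disjunction unchanged, mirroring that Split is only applicable while a disjunctive goal remains.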

Simplify: If a state Sᵢ is of the form Sᵢ = ⟨E; C⟩ with local variables x̄ and !CT^L, Πᵢ^⊗, Sᵢ^L ⊢ S̄ₙ^L holds, we apply Lemma 7 and distinguish two cases:

(i) It is true that:

!CT^L ⊗ ∃x̄.(E^L ⊗ C^L) ⊢ S̄ₙ^L

(ii) There is a formula ρ^{≤1} ∈ Πᵢ and a substitution σ such that

(!ρ) = !∀(!D^L ⊗ F^L ⊸ ∃ȳ.H^L)  and  !CT^L ⊗ C^L ⊢ D^L σ  and  Fσ ⊆ E  and
!CT^L ⊗ (Πᵢ − {ρ})^⊗ ⊗ ∃x̄,ȳ.((E − Fσ)^L ⊗ H^L σ ⊗ C^L ⊗ D^L σ) ⊢ S̄ₙ^L

Case (i) implies that !CT^L ⊢ Sᵢ^L ⊸ S̄ₙ^L, which means that in this case the theorem is proven. In case (ii), a CHR rule (F ⇔ D | H) with variables x̄ has to be applicable, in the sense that CT ⊨ ∀(C → ∃x̄(F ≐ E ∧ D)). Thus, we can apply the Simplify transition:

Sᵢ = ⟨E ∪ G; C⟩ ↦ ⟨H ∪ G; (F ≐ E) ∧ D ∧ C⟩ = Sᵢ₊₁

Furthermore, we set Πᵢ₊₁ = Πᵢ − {ρ} and conclude that !CT^L, Πᵢ₊₁^⊗, Sᵢ₊₁^L ⊢ S̄ₙ^L is provable iff !CT^L, Πᵢ^⊗, Sᵢ^L ⊢ S̄ₙ^L is provable.

Finiteness of the derivation: If case (i) does not apply, we can continue to apply the Split, Solve, and Simplify transitions. We have shown that for every state Sᵢ that we reach in this manner, there is a finite multiset Πᵢ such that !CT^L, Πᵢ^⊗, Sᵢ^L ⊢ S̄ₙ^L is provable. According to Lemmas 4 and 5, this is equivalent to

!CT^L, P^L ⊢ ∀(Sᵢ^L ⊸ S̄ₙ^L)

However, the cardinality of Πᵢ is finite and decreases with each application of the Simplify transition. Therefore, according to Lemma 7, after a finite number of derivation steps case (i) has to occur. This proves the theorem. □