Proof planning is a technique for guiding the search for a proof in automated theorem proving. A proof plan is an outline or plan of a proof. To prove a conjecture, proof planning first constructs a proof plan and then uses it to guide the construction of the proof itself.
Common patterns in proofs are identified and represented in computational form as general-purpose tactics, ie programs for directing the proof search process. These tactics are then formally specified by methods, using a meta-language. Standard patterns of proof failure, and appropriate patches to the failed proof attempts, are represented as critics. To form a proof plan for a conjecture, the proof planner reasons with these methods and critics. The proof plan consists of a customised tactic for the conjecture, whose primitive actions are the general-purpose tactics. This customised tactic directs the search of a tactic-based theorem prover.
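The method/tactic distinction above can be illustrated with a minimal sketch. This is not Clam's actual machinery (Clam is written in Prolog, and real methods are far richer); the names `Method`, `precondition`, `subgoals` and `plan` are all illustrative. A method specifies a tactic at the meta-level by its applicability precondition and the subgoals it leaves; the planner composes these specifications into a customised tactic, here just a sequence of tactic names.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Method:
    """Meta-level specification of a general-purpose tactic (illustrative)."""
    name: str                              # names the tactic this method specifies
    precondition: Callable[[str], bool]    # when does the tactic apply?
    subgoals: Callable[[str], List[str]]   # goals left after the tactic runs

def plan(goal: str, methods: List[Method]) -> Optional[List[str]]:
    """Compose a customised tactic (a sequence of tactic names) for `goal`."""
    if goal == "true":                     # goal already discharged
        return []
    for m in methods:
        if m.precondition(goal):
            tactic = [m.name]
            ok = True
            for sub in m.subgoals(goal):
                subplan = plan(sub, methods)
                if subplan is None:        # a subgoal cannot be planned
                    ok = False
                    break
                tactic += subplan
            if ok:
                return tactic
    return None                            # no method applies: planning fails
```

The customised tactic returned by `plan` would then be handed to a tactic-based prover (Oyster, in the Oyster/Clam system) for execution, with each name standing for a general-purpose tactic.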
For a general, informal introduction to proof planning see: Bundy, A., A Science of Reasoning, in `Computational Logic: Essays in Honor of Alan Robinson', eds Lassez, J-L. and Plotkin, G., MIT Press, pp 178-198, 1991. Proof planning was first introduced in: Bundy, A., The use of explicit plans to guide inductive proofs, Proceedings of CADE-9, eds Lusk, E. and Overbeek, R., Springer-Verlag Lecture Notes in Computer Science No. 310, pp 111-120, 1988. An earlier piece of work that led to the development of proof planning was the use of meta-level inference to guide equation solving, implemented in the Press system (see `Solving symbolic equations with Press', Bundy, A., Sterling, L., Byrd, L., O'Keefe, R. and Silver, B., J. Symbolic Computation, Vol. 7, pp 71-84, 1989).
Yes, in the Oyster/Clam system at Edinburgh and the Omega system at Saarbrücken. Clam is the proof planner: it constructs a customised tactic for a conjecture, and Oyster executes the tactic. An introductory reference to Oyster/Clam is: Bundy, A., van Harmelen, F., Horn, C. and Smaill, A., The Oyster-Clam system, in Proceedings of the 10th International Conference on Automated Deduction, ed. Stickel, M. E., Springer-Verlag, pp 647-648, 1990. Here is the web page of the Omega Group at Saarbrücken.
In principle, Clam could be interfaced to any tactic-based theorem prover. We are currently testing this assertion by interfacing Clam to the Cambridge HOL theorem prover. Here is the web page of this project.
Yes. We have had some success applying proof planning to game playing (Bridge and Go) and to configuration problems. It is potentially applicable wherever there are common patterns of reasoning. Proof planning can be used to match the problem to the reasoning method in a process of meta-level reasoning. Proof planning gives a clean separation between the factual and search control information, which facilitates their independent modification.
Here is a pointer to the research on automating bridge done in my research group. An introduction to our work on configuration is: Lowe, H., Pechoucek, M. and Bundy, A. Proof Planning and Configuration, in "Proceedings of the Ninth Exhibition and Symposium on Industrial Applications of Prolog", 1996.
Rippling is a key method in our proof plans for induction. It is also useful in non-inductive domains. However, one can certainly envisage a proof planning system which does not contain a rippling method (the Saarbrücken Omega system, for instance), and one can envisage using rippling, eg as a tactic, in a non-proof-planning system. So there is no necessary connection. For more on rippling see the rippling FAQ.
Yes, from example proofs. We have had two PhD projects on this: one automated the learning of equation-solving methods from examples; the other automated the learning of inductive proof methods from examples. Both used forms of explanation-based generalisation. The hardest aspect of this is coming up with the key meta-level concepts to describe the method preconditions. An example of such a meta-level concept is the wave-front from rippling. We have not made much progress on automating the learning of these concepts.
Good references to this work are: Silver, B., Meta-level Inference: Representing and Learning Control Information in Artificial Intelligence, North Holland, 1985; and Desimone, R., Explanation-Based Learning of Proof Plans, in "Machine and Human Learning", eds Kodratoff, Y. and Hutchinson, A., Kogan Page, 1989.
Not to my knowledge. This sounds like a promising project. Some of our methods do correspond to the kinds of proof techniques used by experts in the domain. Often students are expected to pick these techniques up by generalising examples. One difficulty with this is that students may not know the meta-level concepts from which the method preconditions are formed. This will prevent them constructing a sufficiently general method. Explicit teaching, especially explanations of the meta-level concepts, may help.
This is an art similar to the skill used by a good mathematics teacher when analysing a student's proof or explaining a new method of proof to a class. The key is identifying the appropriate meta-level concepts to generalise from particular examples. Armed with the right concepts, standard inductive learning techniques can form the right generalisation (see the answer about learning new proof plans for more information).
In certain circumstances proof critics can suggest an appropriate patch to a partial proof plan. Suppose the preconditions of a method succeed, but the method is unpacked into a series of sub-methods, one of which fails, ie the preconditions of that sub-method fail. Critics are associated with some of these patterns of failure. For instance, one critic may fire if the first two preconditions of a method succeed, but the last one fails. It will then suggest an appropriate patch for this kind of failure, eg suggesting the form of a missing lemma or a generalisation of the conjecture. The patch is instituted and proof planning continues.
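The failure-pattern/patch cycle might be sketched as follows. This is only an illustration of the idea, not the actual Clam critics mechanism: the names `check_preconditions` and `lemma_critic`, and the particular failure pattern and patch, are assumptions made for the example.

```python
from typing import Callable, List, Optional

# A precondition tests some meta-level property of the current goal.
Precondition = Callable[[str], bool]

def check_preconditions(goal: str, preconditions: List[Precondition]) -> List[bool]:
    """Evaluate each precondition of a (sub-)method against the current goal."""
    return [p(goal) for p in preconditions]

def lemma_critic(results: List[bool]) -> Optional[str]:
    """Fire on one failure pattern: every precondition but the last succeeds.

    Returns a suggested patch, or None if this critic does not apply.
    """
    if len(results) >= 2 and all(results[:-1]) and not results[-1]:
        return "speculate a missing lemma or generalise the conjecture"
    return None
```

A real critic would of course construct the patch (the candidate lemma or generalised conjecture) from the failed goal itself, rather than return a fixed string; the point here is only that critics pattern-match on which preconditions failed.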
The original critics paper is: Ireland, A., The use of planning critics in mechanizing inductive proofs, in International Conference on Logic Programming and Automated Reasoning (LPAR 92), St. Petersburg, ed. Voronkov, A., Springer-Verlag Lecture Notes in Artificial Intelligence No. 624, pp 178-189, 1992. A more recent paper is: Ireland, A. and Bundy, A., Productive use of failure in inductive proof, Journal of Automated Reasoning, Vol. 16, No. 1-2, pp 79-111, 1996.
In other circumstances, a subgoal may be reached to which no method or critic is applicable. It may be possible to back-up to a choice point in the search, ie a place where two or more methods or critics were applicable. However, the search space defined by the methods and critics is typically much smaller than the search space defined by the object-level rules and axioms; that is both the strength and the weakness of proof planning. The total search space is cropped to the portion where the proof is most likely to occur -- reducing the combinatorial explosion, but losing completeness. It is always possible to regain completeness by supplementing the methods with a default, general-purpose exhaustive search method, but some would regard this as a violation of the spirit of proof planning.
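Backing up to a choice point can be sketched as a depth-first search over alternatives. Again this is an illustration, not Clam's search strategy: representing the applicable methods as a plain mapping from goals to alternative subgoal lists is an assumption made for the example.

```python
from typing import Dict, List, Optional

# Each goal maps to the alternative subgoal lists offered by applicable
# methods; more than one alternative means a choice point in the search.
Alternatives = Dict[str, List[List[str]]]

def plan(goal: str, alternatives: Alternatives) -> Optional[List[str]]:
    if goal == "done":                             # goal already discharged
        return []
    for subgoals in alternatives.get(goal, []):    # try each choice in turn
        steps: List[str] = [goal]
        for sub in subgoals:
            subplan = plan(sub, alternatives)
            if subplan is None:
                break                              # dead end: back up
            steps += subplan
        else:
            return steps                           # all subgoals planned
    return None                                    # no method applies here
```

Because the search branches only at method (and critic) choice points, this space is far smaller than the object-level space of all rule and axiom applications; a default exhaustive method would correspond to giving every goal a catch-all alternative, restoring completeness at the cost of that economy.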
Since general-purpose proof plans represent common patterns in proofs, they cannot, by definition, discover new kinds of proof. This limitation could be overcome in several ways. One would be to include a default method which invoked some general search technique; this might find a new kind of proof by accident. Another might be to have meta-methods which construct new methods. For instance, a method for one domain might be applied to another by generalising its preconditions. [New departures often come in mathematics when mathematicians switch from one area to another, bringing their proof methods with them.] Or a method might be learnt from an example proof (see the answer about learning new proof plans for more information). Proof plans might, for instance, be learnt from proofs constructed by general search.
For the foreseeable future theorem provers will require human interaction to guide the search for non-trivial proofs. Fortunately, proof planning is also useful in interactive theorem provers. Proof plans facilitate the hierarchical organisation of a partial proof, assisting the user to navigate around it and understand its structure. They also provide a language for chunking the proof and for describing the interrelation between the chunks. Interaction with a semi-automated theorem prover can be based on this language. For instance, the user can: ask why a proof method failed to apply; demand that a heuristic precondition is overridden; use the analysis from proof critics to patch a proof; etc.
The XBarnacle system is a semi-automated theorem prover based on proof planning. An introductory paper is: Lowe, H. and Duncan, D., XBarnacle: Making theorem provers more accessible, Proceedings of CADE-14, ed. McCune, W., Springer-Verlag LNAI 1249, pp 404-408, 1997.
Not if the recommended methodology is adopted. The science of reasoning paper specifies a set of criteria for assessing proof plans. These include generality and parsimony, which discourage the creation of ad hoc methods designed to guide particular theorems. Rather, they encourage the design of a few, general-purpose methods which guide a wide range of theorems. The expectancy criterion promotes the association of a method with an explanation of why it works. This discourages the design of methods which often succeed empirically, but for poorly understood reasons. Of course, these criteria are a matter of degree, so poor judgement may produce methods which other researchers regard as ad hoc. The criteria of proof planning then provide a basis for other researchers to criticise such poor judgement.
The properties of a problem that indicate that proof planning might be a good solution are:
The key problem is to identify the tactics and their specifications. This is usually done by studying successful human problem solving and extracting the tactics. Sometimes there are texts describing the tactics, eg in bridge and similar games. Sometimes knowledge acquisition techniques, like those used in expert systems, are needed, eg analysis of problem solving protocols, exploratory interviews with human experts.