E.W.Ayers

Edward William Ayers
A Tool for Producing Verified, Explainable Proofs.

Abstract

Mathematicians are reluctant to use interactive theorem provers. In this thesis I argue that this is because proof assistants don't emphasise explanations of proofs; and that in order to produce good explanations, the system must create proofs in a manner that mimics how humans would create proofs. My research goals are to determine what constitutes a human-like proof and to represent human-like reasoning within an interactive theorem prover to create formalised, understandable proofs. Another goal is to produce a framework to visualise the goal states of this system.

To demonstrate this, I present HumanProof: a piece of software built for the Lean 3 theorem prover. It is used for interactively creating proofs that resemble how human mathematicians reason. The system provides a visual, hierarchical representation of the goal and a system for suggesting available inference rules. The system produces output in the form of both natural language and formal proof terms which are checked by Lean's kernel. This is made possible with the use of a structured goal state system which interfaces with Lean's tactic system, as detailed in Chapter 3.

In Chapter 4, I present the subtasks automation planning subsystem, which is used to produce equality proofs in a human-like fashion. The basic strategy of the subtasks system is to break a given equality problem into a hierarchy of tasks and then maintain a stack of these tasks in order to determine the order in which to apply equational rewriting moves. This process produces equality chains for simple problems without having to resort to brute force or specialised procedures such as normalisation. This makes proofs more human-like by breaking the problem into a hierarchical set of tasks in the same way that a human would.

To produce the interface for this software, I also created the 'widgets' system for Lean 3. This system is detailed in Chapter 5. The widgets system utilises Lean's metaprogramming framework to allow users to write their own interactive, web-based user interfaces to display within the VSCode editor and in an online web-editor. This is also used to create interactive pretty-printing of expressions. The entire tactic state is available to the rendering engine, and hence expression structure and types of subexpressions can be explored interactively. The widgets system also allows the user interface to interactively edit the proof document, enabling a truly interactive modality for creating proofs; human-like or not.

In Chapter 6, the system is evaluated by asking real mathematicians about the output of the system, and what it means for a proof to be understandable to them. The user group study asks participants to rank and comment on proofs created by HumanProof alongside natural language and pure Lean proofs. The study finds that participants generally prefer the HumanProof format over the Lean format. The verbal responses collected during the study indicate that providing intuition and signposting are the most important properties of a proof that aid understanding.

Contents

Chapter 1
Introduction

My first contact with the ideas of formalised mathematics came from reading the anonymously authored QED Manifesto [Ano94[Ano94]AnonymousThe QED manifesto (1994)Automated Deduction--CADE], which envisions a 'QED system' in which all mathematical knowledge is stored in a single, computer-verified repository. (In this thesis, brief details on citations appear in the sidebar; a full bibliography is provided at the end of the document.) This idea dizzied me: perhaps review of mathematics will amount to remarking on style and interest, with checking of proofs performed automatically from a machine-readable document.

The general term that I will use for software that works towards this vision is proof assistant or interactive theorem prover (ITP). A proof assistant at its most general is a piece of software that allows users to create and verify mathematical proofs. In Section 2.1 I will provide more detail on how proof assistants are generally constructed.

In 2007, Freek Wiedijk [Wie07[Wie07]Wiedijk, FreekThe QED manifesto revisited (2007)Studies in Logic, Grammar and Rhetoric] pronounced the QED project to have "not been a success (yet)", citing not enough people working on formalised mathematics and the severe differences between formalised and 'real' mathematics, both at a syntactic level (formalised mathematics resembles source code) and at a foundational level (formalised mathematics is usually constructive and procedural as opposed to classical and declarative). Similarly, Alan Bundy [Bun11[Bun11]Bundy, AlanAutomated theorem provers: a practical tool for the working mathematician? (2011)Annals of Mathematics and Artificial Intelligence] notes that although mathematicians have readily adopted computational tools such as TeX [Knu86[Knu86]Knuth, Donald E.The TeXbook (1986)] and computer algebra systemsA computer algebra system (CAS) is a tool for symbolic manipulation of formulae and expressions, without necessarily having a formalised proof that the manipulation is sound. Examples of CASes include Maple and Mathematica., computer aided proving has had very little impact on the workflow of a working mathematician. Bundy cites several reasons for this which will be discussed in Section 1.1.

Now, a decade later, the tide may be turning. In 2021, proof assistants are pretty good. There are several well-supported large-scale systems such as Isabellehttps://isabelle.in.tum.de/ [Pau89], Coqhttps://coq.inria.fr/ [Coq], Leanhttps://leanprover.github.io/ [MKA+15], HOL Lighthttps://www.cl.cam.ac.uk/~jrh13/hol-light/ [Har09], Agdahttps://wiki.portal.chalmers.se/agda/pmwiki.php [Nor08], Mizarhttp://mizar.org/ [GKN15], PVShttps://pvs.csl.sri.com/ [SORS01] and many more. These systems are used to define and prove mathematical facts in a variety of logics (e.g. FOL, HOL, CIC, univalent foundations). These systems are bridged to powerful automated reasoning systems (e.g. Vampirehttps://vprover.github.io [RV02], Z3https://github.com/Z3Prover/z3 [MB08], Ehttps://www.eprover.org [SCV19] and Leo-IIIhttp://page.mi.fu-berlin.de/lex/leo3/ [SB18a]). Within these systems, many theorems big and small (4-colour theorem [Gon05], Feit-Thompson theorem [GAA+13], Kepler conjecture [HAB+17]) have been proved in a variety of fields, accompanied by large mathematical libraries (Isabelle's Archive of Formal Proofshttps://www.isa-afp.org/, Lean's mathlibhttps://github.com/leanprover-community/mathlib, Coq's Mathematical Componentshttps://math-comp.github.io/, Mizar's Formalized Mathematicshttps://fm.mizar.org/) whose intersection with undergraduate and research level mathematics is steadily growingSee, for example, the rate of growth of the Lean 3 mathematical library..

However, in spite of these advances, we have yet to see widespread adoption of ITP by mathematicians outside of some (growing) cliques of enthusiasts. In this thesis I wish to address this problem through engaging with how mathematicians use and understand proofs, in order to create new ways of interacting with formalised proof. Let's first expand on the problem a little more and then use this to frame the research questions that I will tackle for the remainder of the thesis.

1.1. Mathematicians and proof assistants

Here I offer 3 possible explanations for why mathematicians have not adopted proof assistants. Many have commented on these before: Bundy [Bun11] summarises the main challenges well.

1. Differing attitudes towards correctness and errors. Mathematicians don't worry about mistakes in the same way as proof assistants doI will present some evidence for this in Section 2.6.. Mathematicians care deeply about correctness, but historically the dynamics determining whether a result is considered to be true are also driven by sociological mechanisms such as peer-review; informal correspondences; 'folk' lemmas and principles; reputation of authors; and so on [MUP80[MUP80]de Millo, Richard A; Upton, Richard J; et al.Social processes and proofs of theorems and programs (1980)The mathematical intelligencer]. A proxy for trustworthiness of a result is the number of other mathematicians that have scrutinized the work. That is, if the proof is found on an undergraduate curriculum, you can predict with a high degree of confidence that any errors in the proof will be brought to the lecturer's attention. In contrast, a standalone paper that has not yet been used for any subsequent work by others is typically treated with some degree of caution.

2. High cost. Becoming proficient in an ITP system such as Isabelle or Coq can require a lot of time. And then formalising an area of maths can take around ten times the amount of time required to write a corresponding paper or textbook on the topic. This time quickly balloons if it is also necessary to write any underlying assumed knowledge of the topic (e.g., measure theory first requires real analysis). This 'loss factor' of the space cost of developing a formalised proof over that of a natural language proof was first noted by de Bruijn in relation to his AUTOMATH prover [DeB80[DeB80]De Bruijn, Nicolaas GovertA survey of the project AUTOMATH (1980)To H.B.Curry: Essays on Combinatory Logic,Lambda Calculus and Formalism]. De Bruijn estimates a factor of 20 for AUTOMATH, and Wiedijk later estimates this factor to be closer to three or four in Mizarhttps://mizar.org [Wie00[Wie00]Wiedijk, FreekThe de Bruijn Factor (2000)]. There are costs other than space too, the main ones of concern here being the time needed to learn the tools and the amount of work required per proof.

3. Low reward. What does a mathematician have to gain from formalising their research? In many cases, there is little to gain other than confirming something the researcher knew to be true anyway. The process of formalisation may bring to light 'bugs' in the work: perhaps there is a trivial case that wasn't accounted for or an assumption needs to be weakened. Sometimes the reward is high enough that there is a clear case for formalisation, particularly when the proof involves some computer-generated component. This is exemplified by Hales' proof [Hal05[Hal05]Hales, Thomas CA proof of the Kepler conjecture (2005)Annals of mathematics] and later formalised proof [HAB+17[HAB+17]Hales, Thomas C; Adams, Mark; et al.A formal proof of the Kepler conjecture (2017)Forum of Mathematics, Pi] of the Kepler conjecture. The original proof involved lengthy computer generated steps that were difficult for humans to check, and so Hales led the Flyspeck project to formalise it, taking 21 collaborators around a decade to complete. Another celebrated example is Gonthier's formalisation of the computer-generated proof of the four-colour theorem [Gon05[Gon05]Gonthier, GeorgesA computer-checked proof of the four colour theorem (2005)]. Formalisation is also used regularly in formalising expensive hardware and safety-critical computer software (e.g., [KEH+09[KEH+09]Klein, Gerwin; Elphinstone, Kevin; et al.seL4: Formal verification of an OS kernel (2009)Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles, Pau98[Pau98]Paulson, Lawrence CThe inductive approach to verifying cryptographic protocols (1998)Journal of computer security]).

The economics of the matter are such that the gains of using ITP are too low compared to the costs for the majority of cases. Indeed, since mathematicians have a different attitude to correctness, there are sometimes no benefits to formalisation. As ITP developers, we can improve the situation by either decreasing the learning cost or increasing the utility.

How can we make ITP easier to learn? One way is to teach it in undergraduate mathematics curricula (not just computer science). An example of such a course is Massot's Introduction aux mathématiques formaliséeshttps://www.imo.universite-paris-saclay.fr/~pmassot/enseignement/math114/ taught at the Université Paris Sud. Another way is to improve the usability of the user interface for the proof assistant; I will consider this point in more detail in Section 2.5.

How can we increase the utility that mathematicians gain from using a proof assistant? In this thesis I will argue that one way to help with these three issues is to put more emphasis on interactive theorem provers providing explanations rather than a mere guarantee of correctness. We can see that explanations are important because mathematicians care about new proofs of old results that treat the problem in a new way. Proofs from the Book [AZHE10[AZHE10]Aigner, Martin; Ziegler, Günter M; et al.Proofs from the Book (2010)] catalogues some particularly lovely examples of this.

Can computers also provide informal proofs with more emphasis on explanations? Gowers [Gow10[Gow10]Gowers, W. T.Rough structure and classification (2010)Visions in Mathematics §2] presents an imagined interaction between a mathematician and a proof assistant of the future.

Mathematician. Is the following true? Let . Then for sufficiently large, every set of size at least contains a subset of the form ?

Computer. Yes. If is non-empty, choose and set .

M. All right all right, but what if is not allowed to be zero?

C. Have you tried induction on , with some tending to zero?

M. That idea is no help at all. Give me some examples please.

C. The obvious greedy algorithm gives the set

An interesting feature of this conversation is that the status of the formal correctness of any of the statements conveyed by the computer is not mentioned. Similar notions are brought to light in the work of Corneli et al. [CMM+17[CMM+17]Corneli, Joseph; Martin, Ursula; et al.Modelling the way mathematics is actually done (2017)Proceedings of the 5th ACM SIGPLAN International Workshop on Functional Art, Music, Modeling, and Design] in their modelling of informal mathematical dialogues and exposition.

Why not have both explanatory and verified proofs? I suspect that if an ITP system is to be broadly adopted by mathematicians, it must concisely express theorems and their proofs in a way similar to that which a mathematician would communicate with fellow mathematicians. This not only requires constructing human-readable explanations, but also a reimagining of how the user can interact with the prover.

In this thesis, I will focus on problems that are considered 'routine' for a mathematician. That is, problems that a mathematician would typically do 'on autopilot' or by 'following their nose' For example, showing that from the ring axioms.. I choose to focus on this class of problem because I believe it is an area where ITP could produce proofs that explain why they are true rather than merely provide a certificate of correctness. The typical workflow when faced with a problem like this is to either meticulously provide a low-level proof or apply automation such as Isabelle's auto, or an automation orchestration tool such as Isabelle's Sledgehammer [BN10[BN10]Böhme, Sascha; Nipkow, TobiasSledgehammer: judgement day (2010)International Joint Conference on Automated Reasoning]. In the case of using an automation tacticBroadly, a tactic is a program for creating proofs. I will drill down on how this works in Chapter 2. like auto, the tactic will either fail or succeed, leaving the user with little feedback on why the result is true. There are some tools for producing intelligible proofs from formalised ones, for example, the creation of Isar [Wen99[Wen99]Wenzel, MakariusIsar-a generic interpretative approach to readable formal proof documents (1999)TPHOLs] proofs from Sledgehammer by Blanchette et al. [BBF+16[BBF+16]Blanchette, Jasmin Christian; Böhme, Sascha; et al.Semi-intelligible Isar proofs from machine-generated proofs (2016)Journal of Automated Reasoning]. However, gaining an intuition for a proof will be easier if the proof is generated in a way that reflects how a human would solve the problem, and so translating a machine proof to a proof which a human will extract meaning from is an uphill battle.
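To make this concrete, here is a small Lean 3 sketch of the two workflows on an illustrative ring identity (my own example, assuming mathlib's ring tactic and the mul_add and add_comm lemmas; it is not a problem taken from the thesis):

import tactic.ring

-- One-shot automation: `ring` certifies the routine fact but offers no explanation.
example (a b c : ℤ) : a * (b + c) = a * c + a * b :=
by ring

-- The meticulous low-level alternative: every step is spelled out by hand.
example (a b c : ℤ) : a * (b + c) = a * c + a * b :=
begin
  rw mul_add,  -- a * (b + c) = a * b + a * c
  rw add_comm  -- reorder the summands; the goal then closes by reflexivity
end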

1.2. Research questions

The general arc of this thesis is to help make ITP systems more appealing to mathematicians. In the context of this arc, there arise some key research questions that I seek to study.

Question 1. What constitutes a human-like, understandable proof?

Objectives:

  • Identify what 'human-like' and 'understandable' mean to different people.

  • Distinguish between human-like and machine-like proofs in the context of ITP.

  • Merge these strands to determine a working definition of human-like proof.

Question 2. How can human-like reasoning be represented within an interactive theorem prover to produce formalised, understandable proofs?

Objectives:

  • Form a calculus for representing goal states and inference steps that act at the abstraction layer that a human uses when solving proofs.

  • Create a system for producing natural language proofs from this calculus.

  • Evaluate the resulting system by performing a study on real mathematicians.

Question 3. How can this mode of human-like reasoning be presented to the user in an interactive, multimodal way?

Objectives:

  • Investigate new ways of interacting with proof objects.

  • Make it easier to create novel graphical user interfaces (GUIs) for interactive theorem provers.

  • Produce an interactive interface for a human-like reasoning system.

1.3. Contributions

This thesis presents a number of contributions towards the above research questions:

  1. An abstract calculus for developing human-like proofs (Chapter 3).

  2. An interface between this abstraction layer and a metavariable-driven tactic state, as is used in theorem provers such as Coq and Lean, producing formally verified proofs (Chapter 3 and Appendix A).

  3. A procedure for generating natural language proofs from this calculus (Section 3.6).

  4. The 'subtasks' algorithm, a system for automating the creation of chains of equalities and inequalities. This work has been published in [AGJ19[AGJ19]Ayers, E. W.; Gowers, W. T.; et al.A human-oriented term rewriting system (2019)KI] (Chapter 4).

  5. A graphical user interface framework for interactive theorem provers (Chapter 5). This has been published in [AJG21[AJG21]Ayers, E. W.; Jamnik, Mateja; et al.A graphical user interface framework for formal verification (2021)Interactive Theorem Proving].

  6. An implementation of all of the above contributions in the Lean 3 theorem prover. Link to source code coming soon.

  7. A study assessing the impact of natural language proofs with practising mathematicians (Chapter 6).

1.4. Structure of this document

In Chapter 2, I will provide an overview of the background material needed for the remaining chapters. Next, in Chapter 3, I introduce the HumanProof software for producing human-like proofs within the Lean proof assistant. I provide motivation of the design in Section 3.1, an overview of the system in Section 3.2 and then dive into the details of how the system is designed, including the natural-language generation engine in Section 3.6. Chapter 4 discusses a system for producing equational reasoning proofs called the subtask algorithm. Chapter 5 details the ProofWidgets system, which is used to produce the user interface of HumanProof. Chapter 6 provides the design and results of a user study that I conducted on mathematicians to determine whether HumanProof really does provide understandable proofs. Finally, Chapter 7 wraps things up with some reflection on my progress and a look ahead to future work.

There are also four appendices:

  • Appendix A presents some additional technical detail on interfacing HumanProof with Lean.

  • Appendix B is a tutorial for using ProofWidgets.

  • Appendix C is some additional detail on the algorithms used by ProofWidgets.

  • Appendix D provides supplementary material for Chapter 6.

1.5. Previously published work and collaboration

The work in Chapter 3 is my own, although the box calculus presented is inspired through many sessions of discussion with W.T. Gowers and the design of Gowers' previous collaboration with Ganesalingam [GG17[GG17]Ganesalingam, Mohan; Gowers, W. T.A fully automatic theorem prover with human-style output (2017)J. Automated Reasoning]. More on this will be given when it is surveyed in Section 2.7 and Section 3.3.5.

The work in Chapter 4 is previously published at KI 2019 [AGJ19[AGJ19]Ayers, E. W.; Gowers, W. T.; et al.A human-oriented term rewriting system (2019)KI].

The work presented in Chapter 5 is pending publication in ITP 2021 [AJG21[AJG21]Ayers, E. W.; Jamnik, Mateja; et al.A graphical user interface framework for formal verification (2021)Interactive Theorem Proving] and is also merged into the Lean 3 community repositoryhttps://github.com/leanprover-community/lean. The design is strongly influenced by Elm and React; however, there are a number of novel architectural contributions necessitated by the unique challenges of implementing a portable framework within a proof assistant.

The user study presented in Chapter 6 is all my own work with a lot of advisory help from Mateja Jamnik, Gem Stapleton and Aaron Stockdill on designing the study.

1.6. Acknowledgements

I thank my supervisors W. T. Gowers and Mateja Jamnik for their ideas, encouragement and support and for letting me work on such a wacky topic. I thank Gabriel Ebner and Brian Gin-ge Chen for reading through my ProofWidgets PRs. I thank Patrick Massot, Kevin Buzzard and the rest of the Lean Prover community for complaining about my PRs after the fact. I thank Jeremy Avigad for taking the time to introduce me to Lean at the Big Proof conference back in 2017. I thank Bohua Zhan, Chris Sangwin, and Makarius Wenzel and many more for the enlightening conversations on automation for mathematicians at Big Proof and beyond. I thank Peter Koepke for being so generous in inviting me to Bonn to investigate Naproche/SAD with Steffan Frerix and Andrei Paskevich. I thank Larry Paulson and the ALEXANDRIA team for letting me crash their weekly meetings. I thank my parents for letting me write up in the house during lockdown.

I thank my friends and colleagues in the CMShttp://www.cms.cam.ac.uk. Andrew, Eric, Sammy P, Sven, Ferdia, Mithuna, Oliver×2, Kasia, Sam O-T, Bhavik, Wojciech, ... all kinds of people. In parallel, the CLhttps://www.cst.cam.ac.uk: Chaitanya, Botty, Duo, Daniel, Aaron, Angeliki, Yiannos, Wenda, Zoreh.

I thank everyone in the world.

This research was supported by EPSRC and the Cantab Capital Institute for the Mathematics of Informationhttps://www.ccimi.maths.cam.ac.uk/.

1.6.1. Typesetting acknowledgements

I decided to typeset this thesis as HTML-first, print second. The digital copy may be found at edayers.com. There is no TeX anywhere, and so I would first like to apologise to Donald Knuth. The printed version of this thesis was generated by printing out the website version and concatenating.

I was able to create the thesis in this way thanks to many open-source projects. I will acknowledge the main ones here. Reacthttps://reactjs.org/, Gatsbyhttps://www.gatsbyjs.org/, Tachyonshttps://tachyons.io, PrismJShttps://prismjs.com. Thanks to Titus Woormerhttps://github.com/wooorm for remarkJShttp://github.com/remarkjs and also adding my feature request in less than 24 hours! The code font is PragmataProhttps://fsd.it/pp created by Fabrizio Schiavi. The style of the site is a modified version of the Edward Tufte Handout stylehttps://github.com/edwardtufte/tufte-css. The syntax colouring style is based on the VS theme by Andrew Lockhttps://andrewlock.net. I also use some of the vscode-iconshttps://github.com/microsoft/vscode-icons icons.

Chapter 2
Background

In this chapter I will provide a variety of background material that will be used in later chapters. Later chapters will include links to the relevant sections of this chapter. I cover a variety of topics:

  • Section 2.1 gives an overview of how proof assistants are designed. This provides some context to place this thesis within the field of ITP.

  • Section 2.2 contains some preliminary definitions and notation for types, terms, datatypes and functors that will be used throughout the document.

  • Section 2.3 contains some additional features of inductive datatypes that I will make use of in various places throughout the text.

  • Section 2.4 discusses the way in which metavariables and tactics work within the Lean theorem prover, the system in which the software I write is implemented.

  • Section 2.5 switches gears and surveys the existing work in the field of user interfaces for theorem provers. This is background to Chapter 5.

  • Section 2.6 asks what it means for a person to understand or be confident in a proof. This is used to motivate the work in Chapter 3 and Chapter 4. It is also used to frame the user study I present in Chapter 6.

  • Section 2.7 explores what the automated reasoning literature has to say on how to define and make use of 'human-like reasoning'. This includes a survey of proof planning (Section 2.7.2).

  • Section 2.8 surveys the topic of natural language generation of mathematical texts, used in Section 3.6.

2.1. The architecture of proof assistants, briefly

In this section I am going to provide an overview of the designs of proof assistants for the non-specialist. You may safely skip this section if you are already familiar with them. The structure of this section is inspired by the viewpoint that Andrej Bauer expresses in a MathOverflow answer [Bau20[Bau20]Bauer, AndrejWhat makes dependent type theory more suitable than set theory for proof assistants? (2020)].

A typical architecture of a modern, full-fledged checker-style proof assistant is given in Figure 2.1.

Figure 2.1.

Schematic overview of a typical modern kernel-based proof assistant. In Lean 3, the metalanguage and vernacular boxes are unioned. The aim is to make all of the implementation code in the 'Kernel' box as small and easy to inspect as possible to reduce the chance of bugs and hence the chance of 'verifying' an invalid proof as correct. Further explanation of this figure can be found in Section 2.1.1.

prover architecture diagram

The essential purpose of a proof assistant is to represent mathematical theorems, definitions and proofs in a language that can be robustly checked by a computer. This language, called the foundation language, is equipped with a set of derivation rules. The language defines the set of objects that formally represent mathematical statements and proofs, and the inference rules and axioms provide the valid ways in which these objects can be manipulatedAt this point, we may raise a number of philosophical objections such as whether the structures and derivations 'really' represent mathematical reasoning. The curious reader may enjoy the account given in the first chapter of Logic for Mathematicians by J. Barkley Rosser [Ros53].[Ros53]Rosser, J. BarkleyLogic for Mathematicians (1953). Some examples of foundations are first-order logic (FOL)https://en.wikipedia.org/wiki/First-order_logic, higher-order logic (HOL)https://en.wikipedia.org/wiki/Higher-order_logic, and various forms of dependent type theory (DTT) [Mar84[Mar84]Martin-Löf, PerIntuitionistic type theory (1984), CH88[CH88]Coquand, Thierry; Huet, Gérard P.The Calculus of Constructions (1988)Inf. Comput., PP89[PP89]Pfenning, Frank; Paulin-Mohring, ChristineInductively defined types in the Calculus of Constructions (1989)International Conference on Mathematical Foundations of Programming Semantics, Pro13[Pro13]The Univalent Foundations ProgramHomotopy Type Theory: Univalent Foundations of Mathematics (2013)].

A component of the software called the kernel checks proofs in the foundation. There are numerous foundations and kernel designs. Finding new foundations for mathematics is an open research area but FOL, HOL and DTT mentioned above are the most well-established for performing mathematics. I will categorise kernels as being either 'checkers' or 'builders'.

A 'checker' kernel takes as input a proof expression and outputs a yes/no answer to whether the term is a valid proof. An example of this is the Lean 3 kernel [MKA+15[MKA+15]de Moura, Leonardo; Kong, Soonho; et al.The Lean theorem prover (system description) (2015)International Conference on Automated Deduction].

A 'builder' kernel provides a fixed set of partial functions that can be used to build proofs. Anything that this set of functions accepts is considered as valid. This is called an LCF architecture, originating with Milner [Mil72[Mil72]Milner, RobinLogic for computable functions description of a machine implementation (1972), Gor00[Gor00]Gordon, MikeFrom LCF to HOL: a short history (2000)Proof, language, and interaction]. The most widely used 'builder' is the Isabelle kernel by Paulson [Pau89[Pau89]Paulson, Lawrence CThe foundation of a generic theorem prover (1989)Journal of Automated Reasoning].

Most kernels stick to a single foundation or family of foundations. The exception is Isabelle, which instead provides a 'meta-foundation' for defining foundations; however, the majority of work in Isabelle uses the HOL foundation.

2.1.1. The need for a vernacular

One typically wants the kernel to be as simple as possible, because any bugs in the kernel may result in 'proving' a false statement. For the same reason, the foundation language should also be as simple as possible. However there is a trade-off between kernel simplicity and the usability and readability of the foundation language: if the machine-verified definitions and lemmas are tedious to read and write then the prover will not be adopted by users.

Proof assistant designers need to bridge this gap between a human-readable, human-understandable proof and a machine-readable, machine-checkable proof. A common approach is to use a second language called the vernacular (shown on Figure 2.1). The vernacular is designed as a human-and-machine-readable compromise that is converted to the foundation language through a process called elaboration (e.g., [MAKR15[MAKR15]de Moura, Leonardo; Avigad, Jeremy; et al.Elaboration in Dependent Type Theory (2015)CoRR]). The vernacular typically includes a variety of essential features such as implicit arguments and some form of type inference, as well as high-level programming features such as pattern matching. Optionally, there may be a compiler (see Figure 2.1) for the vernacular to also produce runnable code, for example Lean 3 can compile vernacular to bytecode [EUR+17[EUR+17]Ebner, Gabriel; Ullrich, Sebastian; et al.A metaprogramming framework for formal verification (2017)Proceedings of the ACM on Programming Languages].
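For a small illustration (my own example, not code from the thesis), here is a Lean 3 vernacular declaration with an implicit argument; elaboration fills in the missing information to produce a fully explicit term in the foundation language:

-- vernacular: the type argument α is implicit and is left to the elaborator
def twice {α : Type} (f : α → α) (a : α) : α := f (f a)

-- the user writes `twice nat.succ 3`; elaboration produces `@twice ℕ nat.succ 3`
#check twice nat.succ 3   -- ℕ
#eval twice nat.succ 3    -- 5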

I discuss some work on provers with the vernacular being a restricted form of natural language as one might find in a textbook in Section 2.8.2.

2.1.2. Programs for proving

Using this kernel for checking proofs and a vernacular structure for expressing theorems, we now need to be able to construct proofs of these theorems.

An Automated Theorem Prover (ATP) is a piece of software that produces proofs for a formal theorem statement automatically with a minimal amount of user input as to how to solve the proof, examples include Z3https://github.com/Z3Prover/z3, Ehttps://wwwlehre.dhbw-stuttgart.de/~sschulz/E/E.html and Vampirehttp://www.vprover.org/.

Interactive Theorem Proving (ITP) is the process of creating proofs incrementally through user interaction with a prover. I will provide a review of user interfaces for ITP in Section 2.5. Most proof assistants incorporate various automated and interactive theorem proving components.

Figure 2.2.

An example proof script from the Lean 3 theorem prover. The script proper comprises the lines between the begin and end keywords. Each line in the proof script corresponds to a tactic.
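For illustration, a (deliberately simple) Lean 3 script of this shape looks as follows; it is a stand-in example of my own, not the script shown in the figure itself:

example (p q : Prop) (hp : p) (hq : q) : p ∧ q :=
begin
  split,         -- tactic: split the conjunction into two goals
  { exact hp },  -- tactic: close the first goal using hypothesis hp
  { exact hq }   -- tactic: close the second goal using hypothesis hq
end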

A common modality for allowing the user to interactively construct proofs is the proof script (Figure 2.2): a sequence of textual commands, written by the user, which invoke certain proving programs called tactics that manipulate a state representing a partially constructed proof. Some of these tactics may invoke various ATPs to assist in constructing proofs. Proof scripts may be purely linear as in Figure 2.2 or have a hierarchical structure such as in Isar [Wen99[Wen99]Wenzel, MakariusIsar-a generic interpretative approach to readable formal proof documents (1999)TPHOLs] or HiProof [ADL10[ADL10]Aspinall, David; Denney, Ewen; et al.Tactics for hierarchical proof (2010)Mathematics in Computer Science].

An alternative to a proof script is for the prover to generate an auxiliary proof object file that holds a representation of the proof that is not intended to be human readable. This is the approach taken by PVS [SORS01[SORS01]Shankar, Natarajan; Owre, Sam; et al.PVS prover guide (2001)Computer Science Laboratory, SRI International, Menlo Park, CA], although I will not investigate this approach further in this thesis.

In the process of proving a statement, a prover must keep track of partially built proofs. I will refer to these representations of partially built proofs as development calculi. I will return to development calculi in Section 2.4.

2.1.3. Foundation

A foundation for a prover is built from the following pieces:

  1. A language: defining inductive trees of data that we wish to talk about and also syntax for these trees.

  2. The judgements: meta-level predicates over the above trees.

  3. The derivation rules: a generative set of rules for deriving judgements from other judgements.

To illustrate briefly, the language of simply typed lambda calculus would be expressed as in (2.3).

(2.3).

Example of a grammar. A and X are some sets of variables (usually strings of unicode letters)

๐‘ฅ, ๐‘ฆ, ๐‘ง ::= X -- variable
ฮฑ, ฮฒ ::= A | ฮฑ โ†’ ฮฒ -- type
๐‘ , ๐‘ก ::= ๐‘  ๐‘ก | ฮป (๐‘ฅ : ฮฑ), ๐‘  | X -- term
ฮ“ ::= โˆ… | ฮ“, (๐‘ฅ : ฮฑ) -- context

In (2.3), the purple greek and italicised letters (𝑥, 𝑦, α, ...) here are called nonterminals. They say: "You can replace me with any of the items on the right-hand-side of my ::=". So, for example, "α" can be replaced with either a member of A or "α → β". The words in the final column give a natural language noun to describe the 'type' of the syntax.

In general terms, contexts Γ perform the role of tracking which variables are currently in scope. To see why contexts are needed, consider the expression 𝑥 + 𝑦; its resulting type depends on the types of the variables 𝑥 and 𝑦. If 𝑥 and 𝑦 are both natural numbers, 𝑥 + 𝑦 will be a natural number and similarly for integers or complex numbers. The correct interpretation of 𝑥 + 𝑦 depends on the context of the expression.
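The same phenomenon is visible in Lean 3 (an illustrative snippet of my own): the surface expression x + y is accepted at different types depending on the context in which x and y are declared:

-- the interpretation of `x + y` depends on the typing context
example (x y : ℕ) : ℕ := x + y      -- addition of natural numbers
example (x y : int) : int := x + y  -- addition of integers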

Next we define the judgements for our system in (2.4).

(2.4).

Judgements for an example lambda calculus foundation. Γ, 𝑡 and α may be replaced with expressions drawn from the grammar in (2.3)

Γ ⊢ ok
Γ ⊢ 𝑡 : α

Then write down the natural deduction rules (2.5) for inductively deriving these judgements.

(2.5).

Judgement derivation rules for the example lambda calculus (2.3). Each rule gives a recipe for creating new judgements: given the judgements above the horizontal line, we can derive the judgement below the line (substituting the non-terminals for the appropriate ground terms). In this way you can inductively produce judgements.


∅-ok:
  ─────────
    ∅ ok

append-ok:
  Γ ok    (𝑥 : α) ∉ Γ
  ──────────────────────
   [..Γ, (𝑥 : α)] ok

var-typing:
  (𝑥 : α) ∈ Γ
  ──────────────
   Γ ⊢ 𝑥 : α

app-typing:
  Γ ⊢ 𝑠 : α → β    Γ ⊢ 𝑡 : α
  ─────────────────────────────
         Γ ⊢ 𝑠 𝑡 : β

λ-typing:
  𝑥 ∉ Γ    Γ, (𝑥 : α) ⊢ 𝑡 : β
  ──────────────────────────────
   Γ ⊢ (λ (𝑥 : α), 𝑡) : α → β

And from this point, it is possible to start exploring the ramifications of the system. For example: is Γ ⊢ 𝑠 : α decidable?
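To give a flavour of how such a foundation can be explored, here is a hedged Lean 3 sketch (the names ty, tm, ctx and has_type are hypothetical and this is not the foundation used later in the thesis) encoding the typing judgement of (2.4) and the rules of (2.5) as an inductive predicate:

inductive ty : Type
| base : string → ty
| arrow : ty → ty → ty

inductive tm : Type
| var : string → tm
| app : tm → tm → tm
| lam : string → ty → tm → tm

@[reducible] def ctx := list (string × ty)

-- one constructor per typing rule of (2.5)
inductive has_type : ctx → tm → ty → Prop
| var (Γ : ctx) (x : string) (α : ty) :
    (x, α) ∈ Γ → has_type Γ (tm.var x) α
| app (Γ : ctx) (s t : tm) (α β : ty) :
    has_type Γ s (ty.arrow α β) → has_type Γ t α → has_type Γ (tm.app s t) β
| lam (Γ : ctx) (x : string) (α β : ty) (t : tm) :
    has_type ((x, α) :: Γ) t β → has_type Γ (tm.lam x α t) (ty.arrow α β)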

Every proof assistant worth its salt needs to define the foundation in this way. These are usually written down in papers as a BNF grammar and a spread of gammas, turnstiles and lines as illustrated in (2.3), (2.4) and (2.5). Or as Steele calls it: Computer Science Metanotation [Ste17[Ste17]Steele Jr., Guy L.It's Time for a New Old Language (2017)].

In implementations of proof assistants, the foundation typically doesn't separate quite as cleanly into the above pieces. The language is implemented with a number of optimisations such as de Bruijn indexing [deB72[deB72]de Bruijn, Nicolaas GovertLambda calculus notation with nameless dummies, a tool for automatic formula manipulation, with application to the Church-Rosser theorem (1972)Indagationes Mathematicae (Proceedings)] for the sake of efficiency. Judgements and rules are implicitly encoded in algorithms such as type checking, or appear in forms different from that in the corresponding paper.

In this thesis I will be using the Martin-Löf-style [Mar84[Mar84]Martin-Löf, PerIntuitionistic type theory (1984)] dependent type theory of Lean 3 as implemented by de Moura et al. and formally documented by Carneiro [Car19[Car19]Carneiro, MarioLean's Type Theory (2019)]. A good introduction to mathematical formalisation with dependent type theory is the first chapter of the HoTT Book [Pro13[Pro13]The Univalent Foundations ProgramHomotopy Type Theory: Univalent Foundations of Mathematics (2013) ch. 1]. Other foundations are also available: Isabelle's foundation is two-tiered [Pau89[Pau89]Paulson, Lawrence CThe foundation of a generic theorem prover (1989)Journal of Automated Reasoning]: there is a meta-level foundation upon which many foundations can be implemented. A lot of the work in this thesis is independent of foundation and so I will try to indicate, where I can, how the contributions can be augmented to work in other foundations.

2.2. Preliminaries

This section contains a set of quick preliminary definitions for the concepts and notation that I will be using later. In this thesis I will be using a typed pseudolanguage which should be familiar to most functional programming enthusiasts. This pseudo-language is purely presentational; it will be abused in the name of clarity. If you are comfortable with functional programming and dependent type theory feel free to skip to Section 2.3.

2.2.1. Some notation for talking about type theory and algorithms

You can skip over this section if you are comfortable with dependently-typed functional programming.

The world is built of types and terms. New variables are introduced as "𝑥 : A"; 𝑥 is the variable and it has the type A. Lots of variables with the same type can be introduced as 𝑥 𝑦 𝑧 : A. Types A B C : Type start with an uppercase letter and are coloured turquoise. Type is a special 'type of types'Of course, since this is just pseudocode, we do not have to worry about paradoxes as when choosing a prover foundation in Section 2.4.1.. Meanwhile terms start with a lowercase letter and term variables are purple and italicised. A → B is the function type. → is right associative which means that 𝑓 : A → B → C should be read as 𝑓 : A → (B → C). This is called a curried function; we may consider A and B to be the input arguments of 𝑓 and C to be its return type. Given 𝑎 : A we may apply 𝑓 to 𝑎 by writing 𝑓 𝑎 : B → C. Functions are introduced using maps-to notation (𝑎 : A) ↦ (𝑏 : B) ↦ 𝑓 𝑎 𝑏. Write the identity function 𝑥 ↦ 𝑥 as 𝟙 : X → X. Given 𝑓 : A → B, 𝑔 : B → C, write function composition as 𝑔 ∘ 𝑓 : A → C. Function application is left associative, so 𝑓 𝑎 𝑏 should be read as (𝑓(𝑎))(𝑏). The input types of functions may optionally be given argument names, such as: (𝑎 : A) → (𝑏 : B) → C. We also allow 'dependent types' where the return value C is allowed to depend on these arguments: (𝑎 : A) → 𝒞 𝑎 where 𝒞 : A → Type is a type-valued function.

  • Empty is the empty type.

  • Unit is the type containing a single element ().

  • Bool is the boolean type ranging over values true and false.

  • Option X or X? is the type taking values some 𝑥 for 𝑥 : X or none. some will usually be suppressed.

  • List X or X* is the type of finite lists of X. Given 𝑥 𝑦 : X and 𝑙₁ 𝑙₂ : List X, we can write 𝑥 :: 𝑙₁ for list cons and 𝑙₁ ++ 𝑙₂ for concatenating two lists. For list construction and pattern matching, list spreads will be used. For example, [..𝑙₁, 𝑥, 𝑦, ..𝑙₂] denotes the list formed by concatenating 𝑙₁, [𝑥, 𝑦] and 𝑙₂. Python-style list comprehensions are also used: [𝑖² for 𝑖 in 1..20] is a list of the first 20 square numbers.

  • ℕ is the type of natural numbers. Individual numbers can be used as types: 𝑥 : 3 means that 𝑥 is a natural number taking any value 𝑥 < 3, i.e., 𝑥 ∈ {0,1,2}.

  • A × B is the type of tupleshttps://en.wikipedia.org/wiki/Tuple over A and B. Elements are written as (a, b) : A × B. As usual we have projections π₁ (𝑎, 𝑏) := 𝑎 and π₂ (𝑎, 𝑏) := 𝑏. Members of tuples may be given names as (a : A) × (b : B). In this case, supposing p : (a : A) × (b : B), we can write p.a and p.b instead of π₁ p and π₂ p. Similarly to above, we can have a dependent tuple or 'sigma type' (a : A) × (b : B(a)).

  • A + B is the discriminated union of A and B with constructors inl : A → A + B and inr : B → A + B.

2.2.2. Functors and monads

I will assume that the readers are already familiar with the motivation behind functors and monads in category theory and as used in e.g. Haskell but I will summarise them here for completeness. I refer the unfamiliar reader to the Haskell Typeclassopediahttps://wiki.haskell.org/Typeclassopedia.

Here, a functor will mean a type-valued function F : Type → Type equipped with a function mapper F (𝑓 : A → B) : F A → F BSo here, the word 'functor' is used to mean the special case of category-theoretical endofunctors on the category of Types and functions between them.. I will here always assume that the functor is lawful, which here means it obeys the functor laws (2.6).

(2.6).

Laws for functors.

F (๐‘“ โˆ˜ ๐‘”) = (F ๐‘“) โˆ˜ (F ๐‘”)
F (๐‘ฅ โ†ฆ ๐‘ฅ) ๐‘ฆ = ๐‘ฆ

A natural function a : F ⇒ G between functors F G : Type → Type is a family of functions a[A] : F A → G A indexed by A : Type such that a[B] ∘ F f = G f ∘ a[A] for all f : A → B. Often the type argument to a will be suppressed. It is quick to verify that the functors and the natural functions between them form a category.

A monadFor learning about programming with monads, I recommend https://wiki.haskell.org/All_About_Monads. M : Type → Type is a functor equipped with two natural functions pure : 𝟙 ⇒ M and join : M M ⇒ M obeying the monad laws (2.7). Write 𝑚 >>= 𝑓 := join (M 𝑓 𝑚) for 𝑚 : M A and 𝑓 : A → M B. do notationhttps://wiki.haskell.org/Keywords#do will be used in places.

(2.7).

Laws for monads.

join[X] ∘ (M join[X]) = join[X] ∘ (join[M X])
join[X] ∘ (M pure[X]) = 𝟙[M X]
join[X] ∘ (pure[M X]) = 𝟙[M X]
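As a concrete, purely illustrative instance (my own example), option in Lean 3 is a monad, and do notation is sugar for >>= (bind):

-- `option` is a monad; `do` notation desugars to `>>=`
def add_opts (a b : option ℕ) : option ℕ :=
do x ← a,
   y ← b,
   pure (x + y)

#eval add_opts (some 3) (some 4)  -- some 7
#eval add_opts none (some 4)      -- none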

An applicative functor [MP08[MP08]McBride, Conor; Paterson, RossApplicative programming with effects (2008)J. Funct. Program. §2] M : Type → Type is equipped with pure : A → M A and seq : M (A → B) → M A → M B. Write 𝑓 <*> 𝑎 := seq 𝑓 𝑎<*> is left associative: 𝑢 <*> 𝑣 <*> 𝑤 = (𝑢 <*> 𝑣) <*> 𝑤. and 𝑎 *> 𝑏 := seq (_ ↦ 𝑎) 𝑏. Applicative functors obey the laws given in (2.8).

(2.8).

Laws for applicative functors. I use the same laws as presented by McBride [MP08] but other equivalent sets are available.

(pure ๐Ÿ™) <*> ๐‘ข = ๐‘ข
(pure (โˆ˜)) <*> ๐‘ข <*> ๐‘ฃ <*> ๐‘ค = ๐‘ข <*> (๐‘ฃ <*> ๐‘ค)
(pure ๐‘“) <*> (pure ๐‘ฅ) = pure (๐‘“ ๐‘ฅ)
๐‘ข <*> pure ๐‘ฅ = pure (๐‘“ โ†ฆ ๐‘“ ๐‘ฅ) <*> ๐‘ข

2.2.3. Inductive datatypes

New inductive datatypes are defined with a GADT-like syntaxhttps://wiki.haskell.org/Generalised_algebraic_datatype (2.9).

(2.9).

Example inductive definition of List, where nil : List X and cons : X → List X → List X are the constructors.

List (X : Type) ::=
| nil
| cons (x : X) (l : List X)

In cases where it is obvious which constructor is being used the tag names will be suppressed. Function definitions with pattern matching will use the syntax given in (2.10).

(2.10).

Example of the definition of a function f using pattern matching. The inl and inr constructors are suppressed in the pattern. Provocative spacing is used instead to suggest which case is being matched on.

f : Bool + (X × Y) → ℕ
| true ↦ 3
| false ↦ 0
| (𝑥, 𝑦) ↦ 2

One can express inductive datatypes D as fixpoints of functors D = Fix P where Fix P := P (Fix P). Depending on the underlying category, Fix P will not exist for all PSmyth and Plotkin are the first to place some conditions on when the fixpoint exists [SP82], see Adámek et al for a survey [AMM18].[SP82]Smyth, Michael B; Plotkin, Gordon DThe category-theoretic solution of recursive domain equations (1982)SIAM Journal on Computing, [AMM18]Adámek, Jiří; Milius, Stefan; et al.Fixed points of functors (2018)J. Log. Algebraic Methods Program..

When a D : Type may be written as Fix P for some PStrictly, we should also include an irreducibility condition: there is no Q such that P = Q ∘ Q ∘ ... ∘ Q., P will be called the base functorhttps://hackage.haskell.org/package/recursion-schemes-5.2.2/docs/Data-Functor-Base.html for D. This conceptualisation is useful because we can use the base functor to make related types without needing to explicitly write down the constructors for the modified versions. For example we could make the list lazy with Lazy P X := Fix ((X ↦ Unit → X) ∘ P).
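To illustrate the base-functor idea in Lean 3 (a hedged sketch with hypothetical names; Lean 3 has no generic Fix, so the fixpoint has to be declared as an ordinary inductive type):

-- the base functor for lists: recursive occurrences are replaced by the parameter X
inductive list_base (A X : Type) : Type
| nil  : list_base
| cons : A → X → list_base

-- `Fix (list_base A)` written out by hand: X is tied back to the type itself
inductive mylist (A : Type) : Type
| nil  : mylist
| cons : A → mylist → mylist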

2.3. Inductive gadgets

For the rest of this thesis, I will make use of a few motifs for discussing inductive datastructures, particularly in Section 2.4, Chapter 3, Appendix A and Appendix C. In this section I will lay some background material for working with inductive datatypes. This is covered in various forms in the literature and within implementations of functional programming languages such as Haskellhttps://www.haskell.org/, although the presentation here differs from how I have seen it elsewhere. A framing of inductive datatypes that I have found very helpful, but of which I have not found an account in the literature, is the concept of 'coordinates' for datatypes (Section 2.3.2).

2.3.1. Traversable functors

Given a monad M, a common task is performing a monad-map with f : A → M B over a list of objects l : List A. This is done with the help of a function called mmap (2.11).

(2.11).

Definition of a 'monad map' over lists for an applicative functor M : Type → Type and types A B : Type.

mmap (๐‘“ : A โ†’ M B)
: List A โ†’ M (List B)
| [] โ†ฆ pure []
| (โ„Ž::๐‘™) โ†ฆ pure cons <*> ๐‘“ โ„Ž <*> mmap ๐‘“ ๐‘™

But we can generalise List to some functor T; when can we equip an analogous mmap to T? For example, in the case of binary trees (2.12).

(2.12).

Inductive definition of binary trees and a definition of mmap to compare with (2.11).

Tree A ::=
| leaf : Tree A
| branch : Tree A → A → Tree A → Tree A
mmap (𝑓 : A → M B)
: Tree A → M (Tree B)
| leaf ↦ pure leaf
| (branch 𝑙 𝑎 𝑟) ↦
pure branch <*> mmap 𝑓 𝑙 <*> 𝑓 𝑎 <*> mmap 𝑓 𝑟

A functor T : Type → Type is traversablehttps://wiki.haskell.org/Typeclassopedia#Traversable when for all applicative functors (Section 2.2.2) M : Type → Type, there is a natural map d[M] : (T ∘ M) ⇒ (M ∘ T). That is, for each X : Type we have d[M][X] : T (M X) → M (T X). In addition to being natural, d must obey the traversal laws given in (2.13) [JR12[JR12]Jaskelioff, Mauro; Rypacek, OndrejAn Investigation of the Laws of Traversals (2012)Proceedings Fourth Workshop on Mathematically Structured Functional Programming, MSFP@ETAPS 2012, Tallinn, Estonia Definition 3.3].

(2.13).

Commutative diagrams for the traversal laws.

Given a traversable functor T and a monad M, we can recover mmap : (A → M B) → T A → M (T B) as mmap 𝑓 𝑡 := d[M][B] (T 𝑓 𝑡).

2.3.2. Functors with coordinates

Bird et al [BGM+13[BGM+13]Bird, Richard; Gibbons, Jeremy; et al.Understanding idiomatic traversals backwards and forwards (2013)Proceedings of the 2013 ACM SIGPLAN symposium on Haskell] prove that (in the category of sets) the traversable functors are equivalent to a class of functors called finitary containers. Their theorem states that there is a type Shape T 𝑛 : TypeAn explicit definition of Shape T 𝑛 is the pullback of children[1] : T Unit ⟶ List Unit and !𝑛 : Unit ⟶ List Unit, the list with 𝑛 elements. for each traversable T and 𝑛 : ℕ such that each 𝑡 : T X is isomorphic to an object called a finitary container on Shape T shown in (2.14).

(2.14).

A finitary container is a length length : ℕ, a shape shape : Shape T length and a vector children. Vec length X is the type of lists in X with length length.

T X ≅
(length : ℕ)
× (shape : Shape T length)
× (children : Vec length X)

map and traverse may be defined for the finitary container as map and traverse over the children vector. Since 𝑡 : T X has 𝑡.length child elements, they can be indexed by the numbers {𝑘 : ℕ | 𝑘 < length}. We can then define operations to get and set individual elements according to this index 𝑘.

Usually, however, this numerical indexing of the children of 𝑡 : T X loses the semantics of the datatype. As an example, consider the case of a binary tree Tree in (2.15). A tree 𝑡 : Tree X with 𝑛 branch components will have length 𝑛 and a corresponding children : Vec 𝑛 X, but indexing via numerical indices {𝑘 | 𝑘 < 𝑛} loses information about where the particular child 𝑥 : X can be found in the tree.

(2.15).

Definition of binary trees using a base functor. Compare with the definition (2.12).

TreeBase A X ::=
| leaf : TreeBase A X
| branch : X → A → X → TreeBase A X
Tree A := Fix (TreeBase A)

Now I will introduce a new way of indexing the members of children for the purpose of reasoning about inductive datatypes. This idea has been used and noted before many times, most notably as the idea of paths in universal algebra [BN98[BN98]Baader, Franz; Nipkow, TobiasTerm rewriting and all that (1998) Dfn. 3.1.3]. However I have not seen an explicit account of this idea in the general setting of traversable functors and, later in Section 2.3.3, of general inductive datatypes.

A traversable functor T has coordinates when equipped with a type C : Type and a function coords[𝑛] : Shape T 𝑛 → Vec 𝑛 C. The coords function amounts to a labelling of the 𝑛 children of a particular shape with members of C.

Often when using traversals, working with the children list Vec (length 𝑡) X for each shape of T can become unwieldy; it is convenient to instead explicitly provide a pair of functions get and set (2.16) for manipulating particular children of a given 𝑡 : T X.

(2.16).

Getter and setter signatures and equations. Here 𝑙[𝑖] is the 𝑖th member of 𝑙 : List X and Vec.set : (𝑖 : 𝑛) → Vec 𝑛 X → X → Vec 𝑛 X replaces the 𝑖th member of the given vector with the given X item.

get : C → T X → Option X
set : C → T X → X → T X
get 𝑐 𝑡 = if ∃ 𝑖, (coords 𝑡)[𝑖] = 𝑐
then some 𝑡.children[𝑖]
else none
set 𝑐 𝑡 𝑥 = if ∃ 𝑖, (coords 𝑡)[𝑖] = 𝑐
then Vec.set 𝑖 𝑡.children 𝑥
else 𝑡

C is not unique, and in general should be chosen to have some semantic value for thinking about the structure of T. Here are some examples of functors with coordinates:

  • List has coordinates ℕ. coords 𝑙 for 𝑙 : List X returns a list [0, ⋯, 𝑙.length - 1]. get 𝑖 𝑙 is some 𝑙[𝑖] and set 𝑖 𝑙 𝑥 returns a new list with the 𝑖th element set to be 𝑥.

  • Vec n, lists of length n, has coordinates {k : ℕ | k < n} with the same methods as for List above.

  • Option has coordinates Unit. coords (some 𝑥) := [()] and coords none := []. get _ 𝑜 := 𝑜 and set replaces the value of the option.

  • Binary trees have coordinates List D as shown in (2.17).

(2.17).

Defining the List D coordinates for binary trees. Here the left/right items in C = List D can be interpreted as a sequence of "take the left/right branch" instructions. set is omitted for brevity but follows a similar pattern to get.

D ::= | left | right
coords
: Tree X → List (List D)
| leaf ↦ []
| branch 𝑙 𝑥 𝑟 ↦
[ ..[[left, ..𝑐] for 𝑐 in coords 𝑙]
, []
, ..[[right, ..𝑐] for 𝑐 in coords 𝑟]
]
get : List D → Tree X → Option X
| _ ↦ leaf ↦ none
| [] ↦ branch 𝑙 𝑥 𝑟 ↦ some 𝑥
| [left, ..𝑐] ↦ branch 𝑙 𝑥 𝑟 ↦ get 𝑐 𝑙
| [right, ..𝑐] ↦ branch 𝑙 𝑥 𝑟 ↦ get 𝑐 𝑟

2.3.3. Coordinates on initial algebras of traversable functors

Given a functor F with coordinates C, we can induce coordinates on the free monad Free F : Type โ†’ Type of F. The free monad is defined concretely in (2.18).

(2.18).

Definition of a free monad Free F X and join for a functor F : Type โ†’ Type and X : Type.

Free F X ::=
| pure : X → Free F X
| make : F (Free F X) → Free F X
join : (Free F (Free F X)) → Free F X
| pure 𝑥 ↦ pure 𝑥
| (make 𝑓) ↦ make (F join 𝑓)

We can write Free F X as the fixpoint of A ↦ X + F AAs mentioned in Section 2.2.3, these fixpoints may not exist, but I will assume that they do here for the Fs of concern.. Free F has coordinates List C with methods defined in (2.19).

(2.19).

Definitions of the coordinate methods for Free F given F has coordinates C. Compare with the concrete binary tree definitions (2.17).

coords : Free F X → List (List C)
| pure 𝑥 ↦ []
| make 𝑓 ↦
[ [𝑐, ..𝑎]
for 𝑎 in coords (get 𝑐 𝑓)
for 𝑐 in coords 𝑓]
get : List C → Free F X → Option X
| [] ↦ pure 𝑥 ↦ some 𝑥
| [𝑐, ..𝑎] ↦ make 𝑓 ↦ (get 𝑐 𝑓) >>= get 𝑎
| _ ↦ _ ↦ none
set : List C → Free F X → X → Free F X
| [] ↦ pure _ ↦ 𝑥 ↦ pure 𝑥
| [𝑐, ..𝑎] ↦ make 𝑓 ↦ 𝑥 ↦ make (set 𝑐 𝑓 (set 𝑎 (get 𝑐 𝑓) 𝑥))
| _ ↦ 𝑡 ↦ _ ↦ 𝑡

In a similar manner, List C can be used to reference particular subtrees of an inductive datatype D which is the fixpoint D ≅ F D of a traversable functor F. Let F have coordinates C. D here is not a functor, but we can similarly define coords : D → List (List C), get : List C → D → Option D and set : List C → D → D → D.

The advantage of using coordinates over some other system such as opticshttp://hackage.haskell.org/package/lens [FGM+07[FGM+07]Foster, J Nathan; Greenwald, Michael B; et al.Combinators for bidirectional tree transformations: A linguistic approach to the view-update problem (2007)ACM Transactions on Programming Languages and Systems (TOPLAS)] or other apparati for working with datatypes [LP03[LP03]Lรคmmel, Ralf; Peyton Jones, SimonScrap Your Boilerplate (2003)Programming Languages and Systems, First Asian Symposium, APLAS 2003, Beijing, China, November 27-29, 2003, Proceedings] is that they are much simpler to reason about. A coordinate is just an address of a particular subtree. Another advantage is that the choice of C can convey some semantics on what the coordinate is referencing (for example, C = left | right in (2.17)), which can be lost in other ways of manipulating datastructures.

2.4. Metavariables

Now with a way of talking about logical foundations, we can resume from Section 2.1.2 and consider the problem of how to represent partially constructed terms and proofs given a foundation. This is the purpose of a development calculus: to take some logical system and produce some new system such that one can incrementally build terms and proofs in a way that provides feedback at intermediate points and ensures that various judgements hold for these intermediate terms. In Chapter 3, I will create a new development calculus for building human-like proofs, and in Appendix A this system will be connected to Lean. To do this, we first need to look at how Lean's current development calculus behaves. Since I will be using Lean 3 in this thesis and performing various operations over its expressions, I will follow the same general setup as is used in Lean 3. The design presented here was first developed by Spiwack [Spi11[Spi11]Spiwack, ArnaudVerified computing in homological algebra, a journey exploring the power and limits of dependent type theory (2011)] and first released in Coq 8.5. It was built to allow for a type-safe treatment of creating tactics with metavariables in a dependently-typed foundation.

2.4.1. Expressions and types

In this section I will introduce the expression language of the foundation that will be used for the remainder of the thesis. The system presented here is typical of expression structures found in DTT-based provers such as Lean 3 and Coq. I will not go into detail on induction schemas and other advanced features because the work in this thesis is independent of them.

(2.20).

Definition of a base functor for pure DTT expressions as used by Lean.

ExprBase X ::=
| lambda : Binder → X → ExprBase X -- function abstraction
| pi : Binder → X → ExprBase X -- dependent function type
| var : Name → ExprBase X -- variables
| const : Name → ExprBase X -- constants
| app : X → X → ExprBase X -- function application
| sort : Level → ExprBase X -- type universe
Binder := (name : Name) × (type : Expr)
Context := List Binder
Expr := Fix ExprBase

Here, Level can be thought of as expressions over some signature that evaluate to natural numbers; they are used to stratify Lean's types so that one can avoid Girard's paradox [Hur95[Hur95]Hurkens, Antonius JCA simplification of Girard's paradox (1995)International Conference on Typed Lambda Calculi and Applications]. Name is just a type of easily distinguishable identifiers; in the case of Lean they are lists of strings or numbers. Sugar lambda 𝑥 α 𝑏 as λ (𝑥 ∶ α), 𝑏, pi 𝑥 α 𝑏 as Π (𝑥 ∶ α), 𝑏, app 𝑓 𝑎 as 𝑓 𝑎 and omit var and const when it is clear what the appropriate constructor is.

Using ExprBase, we may define pure expressions Expr := Fix ExprBase as in Section 2.2.3. Note that it is important to distinguish between the meta-level type system introduced in Section 2.2 and the object-level type system where the 'types' are merely instances of ExprTo help with this distinction, I have made an effort to annotate any object-level type assignment statements such as (𝑥 ∶ α) with a variant of the colon ∶ as opposed to :..

Variables may be bound by λ and Π expressions. For example, in λ (𝑥 ∶ α), 𝑡, we say that the expression binds 𝑥 in 𝑡. If 𝑡 contains variables that are not bound, these are called free variables. Now given a partial map σ : Name ⇀ Expr and a term 𝑡 : Expr, define a substitution subst σ 𝑡 : Expr as in (2.21). This will be written as σ 𝑡 for brevity.

(2.21).

Definition of substitution on an expression. Here, ExprBase (subst σ) 𝑒 is mapping each child expression of 𝑒 with subst σ; see Section 2.2.3.

subst σ : Expr → Expr
| var 𝑥 ↦ if 𝑥 ∈ dom σ then σ 𝑥 else 𝑥
| 𝑒 ↦ ExprBase (subst σ) 𝑒

I will denote substitutions as a list of Name ↦ Expr pairs. For example ⦃𝑥 ↦ 𝑡, 𝑦 ↦ 𝑠⦄ where 𝑥 𝑦 : Name are the variables which will be substituted for terms 𝑡 𝑠 : Expr respectively.

Substitution can easily lead to badly-formed expressions if there are variable naming clashes. I need only note here that we can always perform a renaming of variables in a given expression to avoid clashes upon substitution. These clashes are usually avoided within prover implementations with the help of de-Bruijn indexing [deB72[deB72]de Bruijn, Nicolaas GovertLambda calculus notation with nameless dummies, a tool for automatic formula manipulation, with application to the Church-Rosser theorem (1972)Indagationes Mathematicae (Proceedings)].
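
To illustrate the clash problem with a small hypothetical example (not taken from Lean's implementation): applying the substitution ⦃𝑦 ↦ 𝑥⦄ naively to a term that binds 𝑥 changes the meaning of the term, whereas renaming the bound variable first does not.

⦃𝑦 ↦ 𝑥⦄ (λ (𝑥 ∶ α), 𝑦) = λ (𝑥 ∶ α), 𝑥 -- wrong: the substituted 𝑥 is captured by the binder
⦃𝑦 ↦ 𝑥⦄ (λ (𝑥' ∶ α), 𝑦) = λ (𝑥' ∶ α), 𝑥 -- correct after renaming the bound variable to 𝑥'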

2.4.2. Assignable datatypes

Given an expression structure Expr and 𝑡 : Expr, we can define a traversal over all of the immediate subexpressions of 𝑡. However, in order to perform variable-oriented operations such as abstraction and substitution, we need to take care when traversing a subexpression that lies under a binder. For example, for a lambda binder 𝑡 = λ (𝑥 ∶ α), 𝑏, the subexpression 𝑏 has a free variable 𝑥.

(2.22).

Illustrative code for mapping the immediate subexpressions of an expression using child_traverse. This is different from a normal traversal of a datastructure because the mapping function 𝑓 is also passed a context Γ indicating the current variable context of the subexpression. Thus when exploring a λ-binder, 𝑓 can take into account the modified context.

child_traverse (M : Monad) (𝑓 : Context → Expr → M Expr)
: Context → Expr → M Expr
| Γ ↦ (Expr.var 𝑛) ↦ pure (Expr.var 𝑛)
| Γ ↦ (Expr.app 𝑙 𝑟) ↦
pure (Expr.app) <*> 𝑓 Γ 𝑙 <*> 𝑓 Γ 𝑟
| Γ ↦ (Expr.lambda 𝑛 α 𝑏) ↦
pure (Expr.lambda 𝑛) <*> 𝑓 Γ α <*> 𝑓 [..Γ, (𝑛:α)] 𝑏

Once one has this child-traversal function, one can derive all of one's favourite context-aware expression-manipulating tools:

(2.23).

Some example implementations of expression-manipulating tools with the child_traverse construct. The monad structure on Set is pure := 𝑥 ↦ {𝑥}, join (𝑠 : Set (Set X)) := ⋃ 𝑠 and map 𝑓 𝑠 := 𝑓[𝑠]. fv stands for 'free variables'.

instantiate : Name → Expr → Context → Expr → Expr
| 𝑥 ↦ 𝑟 ↦ Γ ↦ (Expr.var 𝑛) ↦ if (𝑥 = 𝑛) then 𝑟 else Expr.var 𝑛
| 𝑥 ↦ 𝑟 ↦ Γ ↦ 𝑡 ↦ child_traverse 𝟙 (instantiate 𝑥 𝑟) Γ 𝑡
fv : Context → Expr → Set Name
| Γ ↦ (Expr.var 𝑛) ↦ if 𝑛 ∈ Γ then ∅ else {𝑛}
| Γ ↦ 𝑡 ↦ child_traverse Set (fv) Γ 𝑡

The idea here is to generalise child_traverse to include any datatype that may involve expressions. Frequently when building systems for proving, one has to make custom datastructures. For example one might wish to create a 'rewrite-rule' structure for modelling equational reasoning (as will be done in Chapter 4):

(2.24).

Simple RewriteRule representation defined as a pair of Exprs, representing lhs = rhs.

RewriteRule := (lhs : Expr) × (rhs : Expr)

But now if we want to perform a variable instantiation or count the number of free variables present in 𝑟 : RewriteRule, we have to write custom definitions to do this. The usual traversal functions from Section 2.3.1 are not adequate here, because we may need to take into account a binder structure. For example, traversing Context as a simple list of names and expressions will produce the wrong output for fv, because some of the variables are bound by previous binders in the context.

To avoid having to write all of this boilerplate, let's make a typeclass assignable (2.25) on datatypes that we need to manipulate the expressions in.

(2.25).

Say that a type X is assignable by equipping X with the given expr_traverse operation.

class assignable (X : Type) :=
(expr_traverse : (M : Monad) → (Context → Expr → M Expr) → Context → X → M X)
expr_traverse M 𝑓
: Context → RewriteRule → M RewriteRule
| Γ ↦ (𝑙, 𝑟) ↦
pure (𝑙 ↦ 𝑟 ↦ (𝑙,𝑟)) <*> 𝑓 Γ 𝑙 <*> 𝑓 Γ 𝑟

Now, provided expr_traverse is defined for X, fv, instantiate and other expression-manipulating operations such as those in (2.23) can be modified to use expr_traverse instead of child_traverse. This assignable regime becomes very useful when using de-Bruijn indices to represent bound variables [deB72[deB72]de Bruijn, Nicolaas GovertLambda calculus notation with nameless dummies, a tool for automatic formula manipulation, with application to the Church-Rosser theorem (1972)Indagationes Mathematicae (Proceedings)] because the length of Γ can be used to determine the binder depth of the current expression. An example implementation, including many more instances of assignable and of expression-manipulating operations built on it, can be found in my Lean implementation of this concepthttps://github.com/leanprover-community/mathlib/pull/5719.

2.4.3. Lean's development calculus

In the Lean source code, there are constructors for Expr other than those in (2.20). Some are for convenience or efficiency reasons (such as Lean 3 macros), but others are part of the Lean development calculus. The main development calculus construction is the mvar or metavariable, sometimes also called an existential variable or schematic variable. An mvar ?m acts as a 'hole' for an expression to be placed in later. There is no kernel machinery to guarantee that an expression containing a metavariable is correct; instead, metavariables are used during the process of building expressions.

As an example, suppose that we needed to prove P ∧ Q for some propositions P Q ∶ Prop. The metavariable-based approach to proving this would be to declare a new metavariable ?𝑡 : P ∧ Q. Then, a program might try to solve this metavariable in two steps: declare two new metavariables ?𝑡₁ : P and ?𝑡₂ : Q; and then assign ?𝑡 with the expression and.make ?𝑡₁ ?𝑡₂ where and.make : P → Q → P ∧ Q creates 'and' proofs. After this, ?𝑡₁ and ?𝑡₂ may themselves be assigned with p : P and q : Q. In this way, the proof term can be built up slowly as ?𝑡 ⟿ and.make ?𝑡₁ ?𝑡₂ ⟿ and.make p ?𝑡₂ ⟿ and.make p q. This process is much more convenient for building modular programs that construct proofs than requiring that a pure proof term be made all in one go.
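
In Lean 3 itself this incremental process can be seen with the refine tactic (a small illustrative example; note that Lean's actual constructor is and.intro rather than the and.make used above):

example (P Q : Prop) (p : P) (q : Q) : P ∧ Q :=
begin
  refine and.intro _ _, -- assigns ?t := and.intro ?t₁ ?t₂, leaving two goals ?t₁ : P and ?t₂ : Q
  { exact p },          -- assigns ?t₁ := p
  { exact q }           -- assigns ?t₂ := q
end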

Lean comes with a development calculus that uses metavariables. This section can be viewed as a more detailed version of the account originally given by de Moura et al [MAKR15[MAKR15]de Moura, Leonardo; Avigad, Jeremy; et al.Elaboration in Dependent Type Theory (2015)CoRR §3.2] with the additional details sourced from inspecting the Lean source codehttps://github.com/leanprover-community/lean. The first use of this kind of development calculus was in Spiwack's thesis [Spi11], where the tactic monad for Coq was augmented with a stateful global metavariable context.

The implementation of Lean adds another constructor to Expr for metavariables:

(2.26).

Redefining Expr with metavariables using the base functor given in (2.20).

Expr ::=
| ExprBase Expr
| ?Name

Metavariables are 'expression holes' and are denoted as ?𝑥 where 𝑥 : Name. They are placeholders into which we promise to substitute a valid pure expression later. Similarly to fv(𝑡) being the free variables in 𝑡 : Expr, we can define mv(𝑡) to be the set of metavariables present in 𝑡. However we still need to be able to typecheck and reduce expressions involving metavariables and so we need to have some additional structure on the context.

The idea is that in addition to a local context Γ, expressions are inspected and created within the scope of a second context called the metavariable context 𝑀 : MvarContext. The metavariable context is a dictionary MvarContext := Name ⇀ MvarDecl where each metavariable declaration 𝑑 : MvarDecl has the following information (a record sketch is given after the list):

  • identifier : Name A unique identifier for the metavariable.

  • type : Expr The type of the metavariable.

  • context : Context The local context of the metavariable. This determines the set of local variables that the metavariable is allowed to depend on.

  • assignment : Option Expr An optional assignment expression. If assignment is not none, say that the metavariable is assigned.
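
Collected into a single record in the style of the other definitions in this chapter (a sketch only; Lean's actual declaration structure differs in its details), this is:

MvarDecl := (identifier : Name) × (type : Expr) × (context : Context) × (assignment : Option Expr)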

The metavariable context can be used to typecheck an expression containing metavariables by assigning each occurrence ?𝑥 with the type given by the corresponding declaration 𝑀[𝑥].type in 𝑀. The assignment field of MvarDecl is used to perform instantiation. We can interpret 𝑀 as a substitution.

As mentioned in Section 2.1.2, the purpose of the development calculus is to represent a partially constructed proof or term. The kernel does not need to check expressions in the development calculus (which here means expressions containing metavariables), so there is no need to ensure that an expression using metavariables is sound in the sense of Section 2.1.3 for some set of inference rules such as those given in (2.5). However, in Section A.1, I will provide some inference rules for typing expressions containing metavariables to assist in showing that the system introduced in Chapter 3 is compatible with Lean.

2.4.4. Tactics

A partially constructed proof or term in Lean is represented as a TacticState object. For our purposes, this can be considered as holding the following data:

(2.27).
TacticState :=
(result : Expr)
× (mctx : MvarContext)
× (goals : List Expr)
Tactic (A : Type) := TacticState → Option (TacticState × A)

The result field is the final expression that will be returned when the tactic completes. goals is a list of metavariables that are used to denote what the tactic state is currently 'focussing on'. Both goals and result are in the context of mctx.

Tactics may perform actions such as modifying the goals or performing assignments of metavariables. In this way, a user may interactively build a proof object by issuing a stream of tactics.
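
As a minimal sketch of how the type in (2.27) composes (illustrative Lean 3 only; TacticState here is a simplified stand-in, not Lean's real tactic state or tactic monad), tactics chain together in the usual state-with-failure way:

-- illustrative Lean 3 sketch; `TacticState` is a stand-in for the structure above
structure TacticState := (goals : list string)

def Tactic (A : Type) : Type := TacticState → option (TacticState × A)

-- sequencing two tactics: run the first, then feed its result into the second
def and_then {A B : Type} (t : Tactic A) (f : A → Tactic B) : Tactic B :=
λ s, match t s with
     | none := none
     | some (s', a) := f a s'
     end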

2.5. User interfaces for provers

Now let's talk about something completely different: user interfaces. One research question in Section 1.2 is to investigate how human-like reasoning can be enabled through the use of interactive graphical user interfaces (GUIs). The field of ITP has a rich history of graphical user interfaces for representing and interacting with proofs and expressions. Here I will provide a brief review of these. The background covered in this section will then be picked up in Chapter 5, where I introduce my own GUI framework for ITP.

An early general user interface for interactive proving was Aspinall's Proof General [Asp00[Asp00]Aspinall, DavidProof General: A generic tool for proof development (2000)International Conference on Tools and Algorithms for the Construction and Analysis of Systems, ALW07[ALW07]Aspinall, David; Lüth, Christoph; et al.A framework for interactive proof (2007)Towards Mechanized Mathematical Assistants]. This took the form of an Emacshttps://www.gnu.org/software/emacs/ extension that offered a general purpose APIApplication programming interface. A set of protocols to allow two applications to communicate with each other. for controlling proof assistants such as Isabelle. A typical Proof General session would make use of two text buffers: the proof script buffer and the goal state buffer. Users type commands into the script buffer, and observe changes in the goal state buffer. This two-panel setup remains the predominant workflow for proof assistants today. Proof General also offers the ability to perform interaction with the goal state, for example 'proof-by-pointing' with subexpressions in the output window.

The idea of proof-by-pointing will play a key role in Section 5.4. It was first described by Bertot and Théry [BT98[BT98]Bertot, Yves; Théry, LaurentA generic approach to building user interfaces for theorem provers (1998)Journal of Symbolic Computation]. The idea of proof-by-pointing is to preserve the semantics of pretty-printedA pretty-printed expression is a string of characters that represents an expression in the underlying foundation of a prover. For example the string 𝑥 + 𝑦 is the pretty printed form of the expression app (app (const "plus") (var "𝑥")) (var "𝑦"). expressions so that a user may inspect the tree structure of the expression through pointing to different parts of the string.

The most advanced specially-created IDEIntegrated Development Environment for proving is Isabelle's Prover IDE (PIDE) [Wen12[Wen12]Wenzel, MakariusIsabelle/jEdit-A Prover IDE within the PIDE Framework. (2012)AISC/MKM/Calculemus], developed primarily by Makarius Wenzel in Scalahttps://www.scala-lang.org/ and based on the JEdit text editorhttp://jedit.org. PIDE richly annotates Isabelle documents and proof states to provide inline documentation; interactive and customisable commands; and automatic insertion of text among other features. PIDE uses a Java GUI library called Swinghttps://docs.oracle.com/javase/8/docs/technotes/guides/swing/. Isabelle's development environment allows users to code their own GUI in Scala. There have been some recent efforts to support VSCodehttps://isabelle.in.tum.de/repos/isabelle/file/tip/src/Tools/VSCode as a client editor for Isabelle files. A web-based client for Isabelle, called Clide [LR13[LR13]Lรผth, Christoph; Ring, MartinA web interface for Isabelle: The next generation (2013)International Conference on Intelligent Computer Mathematics] was developed, although it provided only a subset of the functionality of the JEdit version.

SerAPI [Gal16[Gal16]Gallego Arias, Emilio JesúsSerAPI: Machine-Friendly, Data-Centric Serialization for Coq (2016)] is a library for machine-machine interaction with the Coq theorem prover. The project supports some web-based IDE projects such as jsCoq [GPJ17[GPJ17]Gallego Arias, Emilio Jesús; Pin, Benoît; et al.jsCoq: Towards Hybrid Theorem Proving Interfaces (2017)Proceedings of the 12th Workshop on User Interfaces for Theorem Provers] and PeaCoqhttp://goto.ucsd.edu/peacoq/. Very recently, a framework called Alectryon [Pit20[Pit20]Pit-Claudel, ClémentUntangling mechanized proofs (2020)SLE 2020: Proceedings of the 13th ACM SIGPLAN International Conference on Software Language Engineering] has been released for Coq that enables users to embed web-based representations of data (see the link for more details).

There are some older GUI-centric theorem provers that have fallen out of use: LΩUI [SHB+99[SHB+99]Siekmann, Jörg; Hess, Stephan; et al.LOUI: Lovely OMEGA user interface (1999)Formal Aspects of Computing], HyperProof [BE92[BE92]Barwise, Jon; Etchemendy, JohnHyperproof: Logical reasoning with diagrams (1992)Working Notes of the AAAI Spring Symposium on Reasoning with Diagrammatic Representations] and XBarnacle [LD97[LD97]Lowe, Helen; Duncan, DavidXBarnacle: Making theorem provers more accessible (1997)Automated Deduction - CADE-14]. These tools were all highly innovative for including graphical and multimodal representations of proofs, however the code for these seems to have been lost, paywalled or succumbed to bit rothttps://en.wikipedia.org/wiki/Software_rot, to the extent that I can only view them through the screenshots that they included with the papers. Source code for Ωmega and CLAM (which LΩUI and XBarnacle use respectively) can be found in the Theorem Prover Museumhttps://theoremprover-museum.github.io/.

Other contemporary proof assistants with specially made GUIs are Theorema [BJK+16[BJK+16]Buchberger, Bruno; Jebelean, Tudor; et al.Theorema 2.0: computer-assisted natural-style mathematics (2016)Journal of Formalized Reasoning] and KeY [ABB+16[ABB+16]Ahrendt, Wolfgang; Beckert, Bernhard; et al.Deductive Software Verification - The KeY Book (2016)]. Theorema is built upon the computer algebra system Wolfram Mathematicahttps://www.wolfram.com/mathematica/ and makes use of its inbuilt GUI framework. KeY is a theorem prover for verifying Java applications. KeY embraces multimodal views of proofs and offers numerous interactive proof discovery features and interactive proof-by-pointing inspection of subexpressions. In her thesis, Grebing investigates the usability of KeY [Gre19[Gre19]Grebing, Sarah CaeciliaUser Interaction in Deductive Interactive Program Verification (2019)] through the use of focus groups, an approach relevant for my evaluation study in Chapter 6.

Another source of inspiration for me are the theorem prover web-apps: Vicary's Globular [VKB18[VKB18]Vicary, Jamie; Kissinger, Aleks; et al.Globular: an online proof assistant for higher-dimensional rewriting (2018)Logical Methods in Computer Science] and Breitner's Incredible Proof Machine [Bre16[Bre16]Breitner, JoachimVisual theorem proving with the Incredible Proof Machine (2016)International Conference on Interactive Theorem Proving]. These tools are natively web-based and offer a visual representation of the proof state for users to manipulate.

These tools all demonstrate an ongoing commitment by the ITP community to produce graphical user interfaces which explore new ways of representing and interacting with proof assistants. It is with these previous works in mind that I design a new kind of general purpose approach to a GUI framework for a prover.

2.6. Understandability and confidence

This section is a short survey of literature on what it means for a mathematical proof to be understandable. This is used in Chapter 6 to evaluate my software and to motivate the design of the software in Chapter 3 and Chapter 4. At the end of this section, I hope readers will have gained a sense of what people have already done in this field and feel that the question is a little less nebulous.

2.6.1. Understandability of mathematics in a broader context

What does it mean for a proof to be understandable? An early answer to this question comes from the 17th-century philosopher Spinoza. Spinoza [Spi87[Spi87]Spinoza, BenedictThe chief works of Benedict de Spinoza (1887)] supposes 'four levels' of a student's understanding of a given mathematical principle or rule, which are:

  1. mechanical: The student has learnt a recipe to solve the problem, but no more than that.

  2. inductive: The student has verified the correctness of the rule in a few concrete cases.

  3. rational: The student comprehends a proof of the rule and so can see why it is true generally.

  4. intuitive: The student is so familiar and immersed in the rule that they cannot comprehend it not being true.

This is a good place to start; for the purposes of this thesis I will restrict my attention to type 3 understanding. That is, how the student digests a proof of a general result. If the student is at level 4, and treats the result like a fish treats water, then there seems to be little an ITP system can offer other than perhaps forcing any surprising counterexamples to arise when the student attempts to formalise it.

Edwina Michener's Understanding Understanding Mathematics [Mic78[Mic78]Michener, Edwina RisslandUnderstanding understanding mathematics (1978)Cognitive science] provides a wide ontology of methods for understanding mathematics in a similar fashion to Pólya. Michener (p. 373) proposes that "understanding is a complementary process to problem solving" and incorporates Spinoza's 4-level model. She also references Poincaré's thoughts on understanding [Poi14[Poi14]Poincaré, HenriScience and method (1914) p. 118], from which I will take an extended quote from the original:

What is understanding? Has the word the same meaning for everybody? Does understanding the demonstration of a theorem consist in examining each of the syllogisms of which it is composed and being convinced that it is correct and conforms to the rules of the game? ...

Yes, for some it is; when they have arrived at the conviction, they will say, I understand. But not for the majority... They want to know not only whether the syllogisms are correct, but why they are linked together in one order rather than in another. As long as they appear to them engendered by caprice, and not by an intelligence constantly conscious of the end to be attained, they do not think they have understood.

In a similar spirit, de Millo, Lipton and Perlis [MUP80[MUP80]de Millo, Richard A; Upton, Richard J; et al.Social processes and proofs of theorems and programs (1980)The mathematical intelligencer] write, referring directly to the nascent field of program verification (here referred to as 'proofs of software'):

Mathematical proofs increase our confidence in the truth of mathematical statements only after they have been subjected to the social mechanisms of the mathematical community. These same mechanisms doom the so-called proofs of software, the long formal verifications that correspond, not to the working mathematical proof, but to the imaginary logical structure that the mathematician conjures up to describe his feeling of belief. Verifications are not messages; a person who ran out into the hall to communicate his latest verification would rapidly find himself a social pariah. Verifications cannot really be read; a reader can flay himself through one of the shorter ones by dint of heroic effort, but that's not reading. Being unreadable and - literally - unspeakable, verifications cannot be internalized, transformed, generalized, used, connected to other disciplines, and eventually incorporated into a community consciousness. They cannot acquire credibility gradually, as a mathematical theorem does; one either believes them blindly, as a pure act of faith, or not at all.

Poincarรฉ's concern is that a verified proof is not sufficient for understanding. De Millo et al question whether a verified proof is a proof at all! Even if a result has been technically proven, mathematicians care about the structure and ideas behind the proof itself. If this were not the case, then it would be difficult to explain why new proofs of known results are valued by mathematicians. The question of what exactly they value is what I wish to explore further in the study in Chapter 6.

Many studies investigating mathematical understanding within an educational context exist; see the work of Sierpinska [Sie90[Sie90]Sierpinska, AnnaSome remarks on understanding in mathematics (1990)For the learning of mathematics, Sie94[Sie94]Sierpinska, AnnaUnderstanding in mathematics (1994)] for a summary. See also Pólya's manual on the same topic [Pól62[Pól62]Pólya, GeorgeMathematical Discovery (1962)].

2.6.2. Confidence

Another line of inquiry suggested by Poincarรฉ's quote is distinguishing confidence in a proof from a proof being understandable. By confidence in a proof, I do not mean confidence in the result being true, but instead confidence in the given script actually being a valid proof of the result.

Figure 2.28.

A cartoon illustrating a component of the proof of the Jordan curve theorem for polygons as described by Hales [Hal07]. Call the edge of the purple polygon , then the claim that this cartoon illustrates is that given any disk in green and for any point not on , we can 'walk along a simple polygonal arc' (here in green) to the disk .

As an illustrative example, I will give my own impressions on some proofs of the Jordan curve theoremhttps://en.wikipedia.org/wiki/Jordan_curve_theorem which states that any non-intersecting continuous loop in the 2D Euclidean plane has an interior region and an exterior region. Formal and informal proofs of this theorem are discussed by Hales [Hal07[Hal07]Hales, Thomas CThe Jordan curve theorem, formally and informally (2007)The American Mathematical Monthly]. I am confident that the proof of the Jordan curve theorem formalised by Hales in the HOL Light proof assistant is correct although I can't claim to understand it in full. Contrast this with the diagrammatic proof sketch (Figure 2.28) given in Hales' paper (originating with Thomassen [Tho92[Tho92]Thomassen, CarstenThe Jordan-Schönflies theorem and the classification of surfaces (1992)The American Mathematical Monthly]). This sketch is more understandable to me but I am less confident in it being a correct proof (e.g., maybe there is some curious fractal curve that causes the diagrammatic proofs to stop being obvious...). In the special case of the curve being a polygon, the proof involves "walking along a simple polygonal arc (close to but not intersecting )" and Hales notes:

Nobody doubts the correctness of this argument. Every mathematician knows how to walk without running into walls. Detailed figures indicating how to "walk along a simple polygonal arc" would be superfluous, if not downright insulting. Yet, it is quite another matter altogether to train a computer to run around a maze-like polygon without collisions...

These observations demonstrate how one's confidence in a mathematical result is not merely a formal affair but includes ostensibly informal arguments of correctness. This corroborates the attitude taken by de Millo et al in Section 2.6.1. Additionally, as noted in Section 1.1, confidence in results also includes a social component: a mathematician will be more confident that a result is correct if that result is well established within the field.

There has also been some empirical work on the question of confidence in proofs. Inglis and Alcock [IA12[IA12]Inglis, Matthew; Alcock, LaraExpert and novice approaches to reading mathematical proofs (2012)Journal for Research in Mathematics Education] performed an empirical study on eye movements in undergrads vs postgrads. A set of undergraduates and post-graduate researchers were presented with a set of natural language proofs and then asked to judge the validity of these proofs. The main outcomes they suggest from their work are that mathematicians can disagree about the validity of even short proofs and that post-graduates read proofs in a different way to undergraduates: moving their focus back and forth more. This suggests that we might expect undergraduates and postgraduates to present different reasons for their confidence in the questions.

2.6.3. Understandability and confidence within automated theorem proving.

The concepts of understandability and confidence have also been studied empirically within the context of proof assistants. This will be picked up in Chapter 6.

Stenning et al. [SCO95[SCO95]Stenning, Keith; Cox, Richard; et al.Contrasting the cognitive effects of graphical and sentential logic teaching: reasoning, representation and individual differences (1995)Language and Cognitive Processes] used the graphical Hyperproof software (also discussed in Section 2.5) to compare graphical and sentence-based representations in the teaching of logic. They found that both representations had similar transferabilityThat is, do lessons learnt in one domain transfer to analogous problems in other domains? The psychological literature identifies this as a difficult problem in teaching. and that the best teaching representation (in terms of test scores) was largely dependent on the individual differences between the students. This suggests that in looking for what it means for a proof to be understandable, we should not forget that people have different ways of thinking about proofs, and so there is not going to be a one-size-fits-all solution. It also suggests that providing multiple ways of conceptualising problems should help with understandability.

In Grebing's thesis [Gre19[Gre19]Grebing, Sarah CaeciliaUser Interaction in Deductive Interactive Program Verification (2019)], a set of focus groups are conducted to ask a set of users with a variety of experience-levels in Isabelle and KeY, to reflect on the user interfaces. One of her main findings was that due to the extensive levels of automation in the proving process, there can arise a 'gap' between the user's model of the proof state and the proof state created through the automation. Grebing then provides a bridge for this gap in the form of a proof scripting language and user interface for the KeY prover at a higher level of abstraction than the existing interface. Grebing also provides a review of other empirical studies conducted on the user interfaces of proof assistants [Gre19 ยง6.2.0].

2.7. Human-like reasoning

How should a prover work to produce human-like mathematical reasoning? The easiest answer is: however humans think it should reason!

The very earliest provers such as the Boyer-Moore theorem prover [BM73[BM73]Boyer, Robert S.; Moore, J. StrotherProving Theorems about LISP Functions (1973)IJCAI, BM90[BM90]Boyer, Robert S; Moore, J StrotherA theorem prover for a computational logic (1990)International Conference on Automated Deduction, BKM95[BKM95]Boyer, Robert S; Kaufmann, Matt; et al.The Boyer-Moore theorem prover and its interactive enhancement (1995)Computers & Mathematics with Applications] take this approach to some extent, the design being steered through a process of introspection on how the authors would prove theorems. However, with their 'waterfall' architecture, the main purpose is to prove theorems automatically, rather than to create proofs that a human could follow. Indeed Robinson's machine-like resolution method [BG01[BG01]Bachmair, Leo; Ganzinger, HaraldResolution theorem proving (2001)Handbook of automated reasoning] was such a dominant approach that Bledsoe titled his paper non-resolution theorem proving [Ble81[Ble81]Bledsoe, Woodrow WNon-resolution theorem proving (1981)Readings in Artificial Intelligence]. Another early work on human-oriented reasoning is that of Nevins [Nev74[Nev74]Nevins, Arthur JA human oriented logic for automatic theorem-proving (1974)Journal of the ACM (JACM)]; similar to this thesis, Nevins is motivated by the desire to make proofs more understandable to mathematicians. Some examples of prover automation that are designed to perform moves that a human would do are grind for PVShttp://pvs.csl.sri.com/ [SORS01[SORS01]Shankar, Natarajan; Owre, Sam; et al.PVS prover guide (2001)Computer Science Laboratory, SRI International, Menlo Park, CA] and the waterfall algorithm in ACL2https://www.cs.utexas.edu/users/moore/acl2/ [KMM13[KMM13]Kaufmann, Matt; Manolios, Panagiotis; et al.Computer-aided reasoning: ACL2 case studies (2013)].

My own journey into this field started with reading the work of Gowers and Ganesalingam (G&G) in their Robot prover [GG17[GG17]Ganesalingam, Mohan; Gowers, W. T.A fully automatic theorem prover with human-style output (2017)J. Automated Reasoning]A working fork of this can be found at https://github.com/edayers/robotone.. G&G's motivation was to find a formal system that better represented the way that a human mathematician would solve a mathematics problem, demonstrating this through the ability to generate realistic natural-language write-ups of these proofs. The system made use of a natural-deduction style hierarchical proof-state with structural sharing. The inference rules (called 'moves') on these states and the order in which they were invoked were carefully chosen through an introspective process.

A different approach to exploring human-like reasoning is by modelling the process of mathematical discourse. Pease, Corneli, Martin, et al [CMM+17[CMM+17]Corneli, Joseph; Martin, Ursula; et al.Modelling the way mathematics is actually done (2017)Proceedings of the 5th ACM SIGPLAN International Workshop on Functional Art, Music, Modeling, and Design, PLB+17[PLB+17]Pease, Alison; Lawrence, John; et al.Lakatos-style collaborative mathematics through dialectical, structured and abstract argumentation (2017)Artificial Intelligence] have investigated the use of graphical discourse models of mathematical reasoning. In this thesis, however, I have restricted the scope to human-like methods for solving simple lemmas that can produce machine-checkable proofs.

Another key way in which humans reason is through the use of diagrams and alternate representations of things. A prima facie unintuitive result such as snaps together when presented with the appropriate representation (Figure 2.29). Some recent work investigating and automating this process is the rep2rep project [RSS+20[RSS+20]Raggi, Daniel; Stapleton, Gem; et al.How to (Re)represent it? (2020)32nd IEEE International Conference on Tools with Artificial Intelligence]. Jamnik's previous work also explores how one can perform automated reasoning on the domain of diagrams [Jam01[Jam01]Jamnik, MatejaMathematical Reasoning with Diagrams: From Intuition to Automation (2001)]. This is an important feature of general human-like reasoning, however I will not explore representations further in this thesis.

Figure 2.29.

A visual representation of summing the first integers with counters. The lower black triangle's rows comprise , , , , . From which a human can quickly derive .

2.7.1. Higher levels of abstraction

There have been many previous works which add higher-level abstraction layers atop an existing prover with the aim of making a prover that is more human-like.

Archer et al. developed the TAME system for the PVS prover [AH97[AH97]Archer, Myla; Heitmeyer, ConstanceHuman-style theorem proving using PVS (1997)International Conference on Theorem Proving in Higher Order Logics]. Although they were focussed on proving facts about software rather than mathematics, a lot of the goals are similar. TAME makes use of a higher abstraction level. However it is only applied to reasoning about timed automatahttps://en.wikipedia.org/wiki/Timed_automaton and doesn't include a user study.

As part of the auto2 prover tactic for Isabelle, Zhan [Zha16[Zha16]Zhan, BohuaAUTO2, a saturation-based heuristic prover for higher-order logic (2016)International Conference on Interactive Theorem Proving] developed a high-level proof script syntax to guide the automation of auto2. A script takes the form of asserting several intermediate facts for the prover to prove before proving the main goal. This script is used to steer the auto2 prover towards proving the result. This contrasts with tactic-based proof and structural scripts (e.g. Isar [Wen99[Wen99]Wenzel, MakariusIsar-a generic interpretative approach to readable formal proof documents (1999)TPHOLs]) which are instead instructions for chaining together tactics. With the auto2 style script, it is possible to omit a lot of the detail that would be required by tactic-based scripts, since steps and intermediate goals that are easy for the automation to solve can be omitted entirely.

2.7.2. Proof planning

Proof planning originated with Bundy [Bun88[Bun88]Bundy, AlanThe use of explicit plans to guide inductive proofs (1988)International conference on automated deduction, Bun98[Bun98]Bundy, AlanProof planning (1998)] and is the application of performing a proof with respect to a high-level plan (e.g., I am going to perform induction then simplify terms) that is generated before low-level operations commence (performing induction, running simplification algorithms). The approach follows the general field of AI planning.

AI planning in its most general conception [KKY95[KKY95]Kambhampati, Subbarao; Knoblock, Craig A; et al.Planning as refinement search: A unified framework for evaluating design tradeoffs in partial-order planning (1995)Artificial Intelligence] is the process of searching a graph G using plan-space rather than by searching it directly. In a typical planning system, each point in plan-space is a DAGDirected Acyclic Graph of objects called ground operators or methods, each of which has a mapping to paths in G. Each ground operator is equipped with predicates on the vertices of G called pre/post-conditions. Various AI planning methods such as GRAPHPLAN [BF97[BF97]Blum, Avrim L; Furst, Merrick LFast planning through planning graph analysis (1997)Artificial intelligence] can be employed to discover a partial ordering of these methods, which can then be used to construct a path in G. This procedure applied to the problem of finding proofs is proof planning. The main issue with proof planning [Bun02[Bun02]Bundy, AlanA critique of proof planning (2002)Computational Logic: Logic Programming and Beyond] is that it is difficult to identify sets of conditions and methods that do not cause the plan space to be too large or disconnected. However, in this thesis we are not trying to construct plans for entire proofs, but just to model the thought processes of humans when solving simple equalities. A comparison of the various proof planners is provided by Dennis, Jamnik and Pollet [DJP06[DJP06]Dennis, Louise A; Jamnik, Mateja; et al.On the Comparison of Proof Planning Systems: lambdaCLAM, Ωmega and IsaPlanner (2006)Proceedings of the 12th Symposium on the Integration of Symbolic Computation and Mechanized Reasoning].

Proof planning in the domain of finding equalities frequently involves a technique called rippling [BSV+93[BSV+93]Bundy, Alan; Stevens, Andrew; et al.Rippling: A heuristic for guiding inductive proofs (1993)Artificial Intelligence, BBHI05[BBHI05]Bundy, Alan; Basin, David; et al.Rippling: meta-level guidance for mathematical reasoning (2005)], in which an expression is annotated with additional structure determined by the differences between the two sides of the equation that directs the rewriting process. In our system we avoid using rippling because of our concern for generality: for finding chains of equalities, subtasks achieve similar results and are less tied to particular domains.

Another technique associated with proof planning is the concept of proof critics [Ire92[Ire92]Ireland, AndrewThe use of planning critics in mechanizing inductive proofs (1992)International Conference on Logic for Programming Artificial Intelligence and Reasoning]. Proof critics are programs which take advantage of the information from a failed proof plan to construct a new, amended proof plan. An interactive version of proof critics has also been developed [IJR99[IJR99]Ireland, Andrew; Jackson, Michael; et al.Interactive proof critics (1999)Formal Aspects of Computing].

Another general AI technique that will be relevant to this thesis is hierarchical task networks [MS99[MS99]Melis, Erica; Siekmann, JörgKnowledge-based proof planning (1999)Artificial Intelligence, Tat77[Tat77]Tate, AustinGenerating project networks (1977)IJCAI] which are used to drive the behaviour of artificial agents such as the ICARUS architecture [LCT08[LCT08]Langley, Pat; Choi, Dongkyu; et al.Icarus user's manual (2008)]. Starting tasks are broken down into subtasks, which are then used to find fine-grained methods for achieving the original tasks.

2.8. Natural language for formal mathematics

In this section I will survey the background and related work on the use of natural language generation for proofs. The material in this section will be used in Section 3.6 and Chapter 6.

2.8.1. Natural language generation in a wider context

Data-to-text natural language generation (NLG) is a subfield of natural language processing (NLP) that focusses on the problem of computing intelligible natural language discourses and text from some non-textual object (without a human in the loop!). An example is producing an English description of the local weather forecast from meteorological data. NLG techniques can range from simple 'canned text' and 'mail-merge' applications right up to systems with aspirations of generality such as modern voice recognition in smartphones.

There are a wide variety of architectures available for modern NLG [GK18[GK18]Gatt, Albert; Krahmer, EmielSurvey of the state of the art in natural language generation: Core tasks, applications and evaluation (2018)Journal of Artificial Intelligence Research], however they usually carry a modular structure, with a backbone [RD00[RD00]Reiter, Ehud; Dale, RobertBuilding natural language generation systems (2000)] being split in to three pipeline stages as shown in Figure 2.30.

Figure 2.30.

Outline of a common architecture for general NLG systems.

  • Macro-planner or discourse planner: dictates how to structure the general flow of the text. That is, serialising the input data. These often take the form of 'expert systems' with a large amount of domain specific knowledge encoded.

  • Micro-planner: determines how the stream of information from the macro-planner should be converted into individual sentences, how sentences should be structured, and how the argument should 'flow'.

  • Realiser: produces the final text from the abstracted output of the micro-planner. For example, applying punctuation rules and choosing the correct conjugations.

These choices of stages are mainly motivated through a desire to reuse code and to separate concerns (a realiser does not need to know the subject of the text whose punctuation it is correcting).

An alternative approach to the one outlined above is to use statistical methods for natural language generation. With the advent of scalable machine learning (ML) and neural networks (NNs) in the 2010s, statistical methods have come to dominate many NLG tasks such as translation and scene description. The system developed for this work in Section 3.6 is purely classical, with no machine learning component. In the context of producing simple write-ups of proofs, there would likely be some gains from including ML, but it is not clear that a statistical approach to NLG is going to assist in building understandable descriptions of proofs, because there is no way to confirm whether the description generated by a black-box NLG component is related to the input.

2.8.2. Natural language generation for mathematics

The first modern study of the linguistics of natural language mathematics is the work of Ranta [Ran94[Ran94]Ranta, AarneSyntactic categories in the language of mathematics (1994)International Workshop on Types for Proofs and Programs, Ran95[Ran95]Ranta, AarneContext-relative syntactic categories and the formalization of mathematical text (1995)International Workshop on Types for Proofs and Programs] concerning the translation between dependent type theory and natural language and I will use some of his insights in Section 3.6. Ganesalingam's thesis [Gan10[Gan10]Ganesalingam, MohanThe language of mathematics (2010)] is an excellent reference for understanding the linguistics of mathematics in general, however it is more concerned with natural language input.

There have been numerous previous attempts at creating natural language output from a theorem prover: Felty-Miller [FM87[FM87]Felty, Amy; Miller, DaleProof explanation and revision (1987)], Holland-Minkley et al within the NuPrl proverhttps://nuprl.org [HBC99[HBC99]Holland-Minkley, Amanda M; Barzilay, Regina; et al.Verbalization of High-Level Formal Proofs. (1999)AAAI/IAAI], and also in Theorema [BCJ+06[BCJ+06]Buchberger, Bruno; Crǎciun, Adrian; et al.Theorema: Towards computer-aided mathematical theory exploration (2006)Journal of Applied Logic]. A particularly advanced NLG for provers was Proverb [HF97[HF97]Huang, Xiaorong; Fiedler, ArminProof Verbalization as an Application of NLG (1997)IJCAI (2)] for the Ωmega theorem prover [BCF+97[BCF+97]Benzmüller, Christoph; Cheikhrouhou, Lassaad; et al.Omega: Towards a Mathematical Assistant (1997)Automated Deduction - CADE-14]; this system's architecture uses the pipeline in Figure 2.30 and takes as input a proof term generated by the Ωmega toolchain and outputs a natural language sentence.

The process of synthesising natural language is difficult in the general case. But as Gowers and Ganesalingam (henceforth G&G) [GG17[GG17]Ganesalingam, Mohan; Gowers, W. T.A fully automatic theorem prover with human-style output (2017)J. Automated Reasoning] note, the language found in mathematical proofs is much more restricted than a general English text. At its most basic, a natural language proof is little more than a string of facts from the assumptions to the conclusion. There is no need for time-sensitive tenses or other complexities that arise in general text. Proofs are written this way because mathematical proofs are written to be checked by a human and so a uniformity of prose is used that minimises the chance of 'bugs' creeping in. This, combined with a development calculus designed to encourage human-like proof steps, makes the problem of creating mathematical natural language write-ups much more tenable. I will refer to these non-machine-learning approaches as 'classical' NLG.

A related problem worth mentioning here is the reverse process of NLG: parsing formal proofs and theorem statements from a natural language text. The two problems are interlinked in that they are both operating on the same grammar and semantics, but parsing raises a distinct set of problems to NLG, particularly around ambiguity [Gan10 ch. 2]. Within mathematical parsing there are two approaches. The first approach is controlled natural language [Kuh14[Kuh14]Kuhn, TobiasA survey and classification of controlled natural languages (2014)Computational linguistics] as practiced by ForTheL [Pas07[Pas07]Paskevich, AndreiThe syntax and semantics of the ForTheL language (2007)] and Naproche/SAD [CFK+09[CFK+09]Cramer, Marcos; Fisseni, Bernhard; et al.The naproche project controlled natural language proof checking of mathematical texts (2009)International Workshop on Controlled Natural Language]. Here, a grammar is specified to parse text that is designed to look as close to a natural language version of the text as possible. The other approach (which I will not make use of in this thesis) is to use machine learning techniques, for example the work of Stathopoulos et al on parsing natural mathematical texts [ST16[ST16]Stathopoulos, Yiannos A; Teufel, SimoneMathematical information retrieval based on type embeddings and query expansion (2016)COLING 2016, SBRT18[SBRT18]Stathopoulos, Yiannos; Baker, Simon; et al.Variable Typing: Assigning Meaning to Variables in Mathematical Text (2018)NAACL-HLT 2018].

Chapter 3
A development calculus

Now that we have reviewed the requisite background material, I can define the moving parts of a human-like theorem prover. The driving principle is to find ways of representing proofs at the same level of detail that a human mathematician would use to communicate to colleagues.

The contributions of this chapter are:

  • The Box datastructure, a development calculus (Section 3.3) designed to better capture how humans reason about proofs while also being formally sound.

  • A set of transformations (moves) on Box which preserve this soundness (Section 3.5).

  • A natural language write-up component converting proof objects created with this layer to an interactive piece of text (Section 3.6).

  • In the supplementary Appendix A, an 'escape hatch' from the Box datastructure to a metavariable-oriented goal state system as used by Lean (Section 3.4.4, Appendix A), this enables compatibility between Box-style proofs and existing automation and verification within the proof assistant.

I wish to make sure that the system integrates with an existing proof assistant (in this case Lean). This is because, by plugging in to an existing prover, it is possible to gain leverage by utilising the already developed infrastructure for that prover such as parsers, tactics and automation. Using an existing prover also means that the verification of proofs can be outsourced to the prover's kernel.

The first challenge is to find a suitable way of determining what it means to be 'human-like'. This is the first research question of Section 1.2 and I provided a review in Section 2.7. Humans think differently to each other, and I do not wish to state that there is a 'right' way to perform mathematics. However I do wish to argue that there are certain ways in which the current methods for performing ITP should be closer to the general cluster of ways in which humans talk about and solve problems.

In this chapter I will investigate some ways in which the inference rules that provers take could be made more human-like and then introduce a new proving abstraction layer, HumanProof, written in the Lean 3 theorem prover, implementing these ideas. Later, in Chapter 6 I will gather thoughts and ratings from real mathematicians about the extent to which the developed system achieves these goals.

In Section 3.1, I will first present an example proof produced by a human to highlight the key features of 'human-like' reasoning that I wish to emulate. Then in Section 3.2 I will give an overview of the resulting designs and underline the primary design decisions and the evidence that drives them. In Section 3.4 I will provide the details and theory of how the system works through defining the key Box structure and 'moves' on Boxes, as well as how to run standard tactics within Boxes (Section 3.4.4). This theoretical basis will then be used to define the human-like moves in Section 3.5. Then, I will detail the natural language generation pipeline for HumanProof in Section 3.6.

3.1. Motivation

3.1.1. The need for human-like systems

In Section 1.1 I noted that non-specialist mathematicians have yet to widely accept proof assistants despite the adoption of other tools such as computer algebra systems. One way in which to improve this situation is to reduce the cost of learning to use proof assistants through making the way in which they process proofs more similar to how a human would. Doing so would reduce the learning curve for a new user by making the proofs more closely match what the mathematician already knows.

This rules out many automated reasoning methods such as resolution [BG01[BG01]Bachmair, Leo; Ganzinger, HaraldResolution theorem proving (2001)Handbook of automated reasoning][Ble81]Bledsoe, Woodrow WNon-resolution theorem proving (1981)Readings in Artificial IntelligenceCompare with non-resolution theorem proving [Ble81] discussed further in Section 2.7.. This is because typically the original statement of the proposition to be proved will be first reduced to a normal form and mechanically manipulated with a small set of inference rules. The resulting proof will be scarcely recognisable to a mathematician as a proof of the proposition, even if it is accepted by the kernel of a proof assistant. As discussed in Section 1.1, Section 2.6 and as will be confirmed in Chapter 6, mathematicians care not just about a certificate that a statement is correct but also care about the way in which the statement is correct.

Given some new way of creating proofs, how can we determine whether these created proofs are more 'human-like' than those of some other system? The way I propose here is to require that the program be able to imitate the reasoning of humans at least well enough to produce convincing natural language write-ups of the proofs that it generates, and then to test the convincingness of these write-ups by asking mathematicians. This approach is shared by the previous work of Gowers and Ganesalingam [GG17[GG17]Ganesalingam, Mohan; Gowers, W. T.A fully automatic theorem prover with human-style output (2017)J. Automated Reasoning] (henceforth abbreviated G&G), who use a similar framework to the HumanProof system presented in this thesis to produce natural language write-ups of proofs for some lemmas in the domain of metric space topology. The work presented in this thesis expands significantly on the work of G&G.

3.1.2. Modelling human-like reasoning

Building on the background where I explored the literature on the definition of 'human-like' (Section 2.7) and 'understandable' (Section 2.6.1) proofs, my goal in this section is to find some specific improvements to the way in which computer aided mathematics is done.

One of the key insights of Gowers and Ganesalingam is that humans reason with a different 'basis' of methods than the logical operations and tactics that are provided to the user of an ITP. For example, a hypothesis such as a function being continuous expands to a formula (3.1) with interlaced quantifiers.

(3.1).

Definition of a continuous function for metric spaces , . Here is the distance metric for or .

However, in a mathematical text, the hypothesis that a function is continuous will typically be applied in one go, whereas in an ITP this process will need to be separated into numerous steps.

An example with the opposite problem is automated tactics such as the tableaux prover blast [Pau99[Pau99]Paulson, Lawrence CA generic tableau prover and its integration with Isabelle (1999)Journal of Universal Computer Science]. The issue with these is that their process is opaque and leaves little explanation for why they succeed or fail. They may also step over multiple stages that a human would rather see spelled out in full. The most common occurrence of this is on definition expansion; two terms may be identical modulo definition expansion, but a proof found in a textbook will often take the time to point out when such an expansion takes place.

This points towards creating a new set of 'moves' for constructing proofs, each corresponding more closely to a single reasoning step as might be used by a human mathematician.

3.1.3. Structural sharing

When humans reason about mathematical proofs, they will often flip between forwards reasoning and backwards reasoningBroadly speaking, forwards reasoning is any mode of modifying the goal state that acts only on the hypotheses of the proof state. Whereas backwards reasoning modifies the targets.. The goal-centric proof state used by ITPs can make this kind of reasoning difficult. In the most simple example, suppose that the goal is P ∧ Q ⊢ Q ∧ PThat is, given the hypothesis P ∧ Q, prove Q ∧ P where P and Q are propositions and ∧ is the logical-and operation.. One solution is to perform a split on the target to produce P ∧ Q ⊢ Q and P ∧ Q ⊢ P. However, performing a conjunction elimination on the P ∧ Q hypothesis will then need to be performed on both of the new targets. This is avoided if the elimination is performed before splitting P ∧ Q. In this simplified example it is clear in which order the forwards and backwards reasoning should be performed. But in more complex proofs, it may be difficult to see ahead how to proceed. A series of backwards reasoning steps may provide a clue as to how forwards reasoning should be applied. The usual way that this problem is solved is for the human to edit an earlier part of the proof script with the forwards reasoning step on discovering this. I reject this solution because it means that the resulting proof script no longer represents the reasoning process of the creator: the fact that the forwards reasoning step was motivated by the goal state at a later point is lost.
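
As a minimal Lean 3 illustration of this ordering concern (my own example, not part of HumanProof): performing the forwards step (cases) before the backwards step (split) means the conjunction elimination happens only once.

example (P Q : Prop) : P ∧ Q → Q ∧ P :=
begin
  intro h,
  cases h with hp hq, -- forwards reasoning, performed once
  split,              -- backwards reasoning, producing two targets
  { exact hq },
  { exact hp }
end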

The need to share structure among objects in the name of efficiency has been studied at least as far back as Boyer and Moore [BM72[BM72]Boyer, R. S.; Moore, J. S.The sharing structure in theorem-proving programs (1972)Machine intelligence]; however, the motivation for introducing it here is purely the creation of human-like proofs.

The solution that I use here is to use a different representation of the goal state that allows for structural sharing. This alteration puts the proof state calculus more in the camp of OLEG [McB00[McB00]McBride, ConorDependently typed functional programs and their proofs (2000)], and the G&G prover. The details of the implementation of structural sharing are presented later in Section 3.5.4.

Structural sharing can also be used to implement backtracking and counterfactuals. For example, suppose that we need to prove A ⊢ P ∨ Q; one could apply the ∨-left-introduction rule P ⇒ P ∨ Q, but then one might need to backtrack later in the event that the right-introduction rule Q ⇒ P ∨ Q should really have been used instead. Structural sharing lets us split a target into two counterfactuals.
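For instance, in a flat Lean 3 tactic proof one must commit to a disjunct up front (a standalone illustrative sketch); if the choice turns out to be wrong, the script has to be edited and the step undone by hand:

example (P Q : Prop) (h : Q) : P ∨ Q :=
begin
  right,   -- commit to proving the right disjunct; a wrong choice here must be manually backtracked
  exact h
end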

3.1.4. Verification

One of the key benefits of proof assistants is that they can rigorously check whether a proof is correct. This distinguishes the HumanProof project from the prior work of G&G, where no formal proof checking was present. While I have argued in Section 2.6And as will later be suggested by the results of my user study in Section 6.6. that this guarantee of correctness is less important for attracting working mathematicians, I wish to demonstrate here that there need not be a conflict between a prover being easy for non-specialists to understand and being formally verified.

3.1.5. What about proof planning?

Proof planning is the process of creating proofs using abstract proof methods that are assembled with the use of classical AI planning algorithmsAn introduction to classical AI planning can be found in Russell and Norvig [RN10 Pt.III].[RN10]Russell, Stuart J.; Norvig, PeterArtificial Intelligence - A Modern Approach (2010). The concept of proof planning was first introduced by Bundy [Bun88[Bun88]Bundy, AlanThe use of explicit plans to guide inductive proofs (1988)International conference on automated deduction]. A review of proof planning is given in Section 2.7.2.

The primary issue with proof planning is that it has a sharp learning curve. In order to get started with proof plans, one must learn a great deal of terminology and a new way of thinking about formalised mathematics. Bundy presents his own critique of proof planning [Bun02[Bun02]Bundy, AlanA critique of proof planning (2002)Computational Logic: Logic Programming and Beyond] which goes into more detail on this point.

The study of proof planning has fallen out of favour in the 21st century, possibly in relation to the rise of practical automated provers such as the E proverhttps://wwwlehre.dhbw-stuttgart.de/~sschulz/E/E.html [SCV19[SCV19]Schulz, Stephan; Cruanes, Simon; et al.Faster, Higher, Stronger: E 2.3 (2019)Proc. of the 27th CADE, Natal, Brasil] and the Z3 SMT solverhttps://github.com/Z3Prover/z3 [MB08[MB08]de Moura, Leonardo; Bjørner, NikolajZ3: An efficient SMT solver (2008)International conference on Tools and Algorithms for the Construction and Analysis of Systems], and their incorporation into ITPs through the use of 'hammer' software like Isabelle's Sledgehammer [BN10[BN10]Böhme, Sascha; Nipkow, TobiasSledgehammer: judgement day (2010)International Joint Conference on Automated Reasoning]. I share a great deal of the ideals that directed proof planning, and the equational reasoning system presented in Chapter 4 is inspired by it, although I wish to take a more practical stance: the additional abstractions that are placed atop the underlying tactic system should be transparent, in that they are understandable without needing to be familiar with proof planning, and with easy 'escape hatches' back to the tactic world if needed.

3.2. Overview of the software

The software implementation of the work presented in this thesis is called 'HumanProof' and is implemented using the Lean 3 proverhttps://leanprover-community.github.io. The source code can be found here (link to appear). In this section I give a high-level overview of the system and some example screenshots. A general overview of the system and how it relates to the underlying Lean theorem prover is shown in Figure 3.2.

Figure 3.2.

High-level overview of the main modules that comprise the HumanProof system and how these interface with Lean, ProofWidgets and the VSCode text editor. The green parts of the diagram are contributions of this thesis. ProofWidgets (Chapter 5) was spun out from HumanProof for use as a general-purpose GUI system so that it could be used in other community projects (see the section on community-built widgets in Chapter 5).

Given a theorem to prove, HumanProof is invoked by marking the lemma's proof with a special begin [hp] script block in the proof document (see Figure 3.3). This initialises HumanProof's Box datastructure with the assumptions and target proposition of the proof. The initial state of the prover is shown in the goal view of the development environment, here called the Info View. Using the ProofWidgets framework (developed in Chapter 5), this display of the state is interactive: the user can click at various points in the document to determine their next steps. Users can then manipulate this datastructure either through the use of interactive buttons or by typing commands into the proof script in the editor. In the event of clicking the buttons, the commands are immediately added to the proof script sourcefile as if the user had typed them themselves (the left panel of Figure 3.3). In this way, the user can create proofs interactively whilst still preserving the plaintext proof document as the single source of truth; this ensures that there is no hidden state in the interactive view that is needed for Lean to reconstruct a proof of the statement. While the proof is being created, the system also produces a natural language write-up of the proof (Section 3.6) that is displayed alongside the proof state; as the proof progresses, users can see the natural language proof grow with it.

The system also comes equipped with a module for solving equalities using the 'subtasks algorithm' (Chapter 4). The subtasks algorithm uses a hierarchical planning system to produce an equality proof that is intended to match the way that a human would create the proof, as opposed to a more machine-like approach such as E-matching [BN98[BN98]Baader, Franz; Nipkow, TobiasTerm rewriting and all that (1998) Ch. 10]. The output of this subsystem is a chain of equations that is inserted into the natural language write-up.

Figure 3.3.

Screenshot of HumanProof in action on a test lemma. To the left is the code editor. The user invokes HumanProof with the begin [hp] command. The blue apply H button can be clicked to automatically insert more proofscript.

3.3. The Box datastructure

At the heart of HumanProof is a development calculus using a datastructure called Box. The considerations from Section 3.1.3 led to the development of an 'on-tree' development calculus. Rather than storing a flat list of goals and a metavariable context alongside the result, the entire development state is stored in a recursive tree structure which I will call a box tree. The box tree, to be defined in Section 3.3.2, stores the proof state as an incomplete proof tree with on-tree metavariable declarations which is then presented to the user as a nested set of boxes.

As we shall investigate in Section 3.3.5, this design puts the Box calculus in the company of McBride's OLEG [McB00[McB00]McBride, ConorDependently typed functional programs and their proofs (2000)] and G&G's Robot [GG17[GG17]Ganesalingam, Mohan; Gowers, W. T.A fully automatic theorem prover with human-style output (2017)J. Automated Reasoning]. A more abstract treatment can be found in the work of Sterling and Harper [SH17[SH17]Sterling, Jonathan; Harper, RobertAlgebraic Foundations of Proof Refinement (2017)arXiv preprint arXiv:1703.05215], implemented within the RedPRL theorem proverhttps://redprl.org.

The novel contribution of the development calculus developed here is that it works within a Spiwack-style [Spi11[Spi11]Spiwack, ArnaudVerified computing in homological algebra, a journey exploring the power and limits of dependent type theory (2011)]See Section 2.4 for more background information. flat metavariable context model as is used in Lean. That is, it is a layer atop the existing metavariable context system detailed in Section 2.4.3. This means that it is possible for the new calculus to work alongside an existing prover, rather than having to develop an entirely new one as was required for OLEG and Robot. This choice opens many possibilities: one can now leverage many of the advanced features that Lean offers, such as a full-fledged modern editor and metaprogramming toolchain [EUR+17[EUR+17]Ebner, Gabriel; Ullrich, Sebastian; et al.A metaprogramming framework for formal verification (2017)Proceedings of the ACM on Programming Languages]. This approach also reduces some of the burden of correctness pressed upon alternative theorem provers, because we can outsource correctness checking to the Lean kernel. Even with this protection, it is still frustrating when a development calculus produces an incorrect proof, and so I will also provide some theoretical results here and in Appendix A on some conditions that must be met for a proof step to be sound. The design of the Box calculus is also independent of any particular features of Lean, and so a variant of it may also be implemented in other systems.

The central datatype is the Box. This performs the role of holding a partially constructed proof object and a representation of the goals that remain to be solved. As discussed in Section 3.1.3, the purpose is to have a structurally shared tree of goals and assumptions that is also compatible with Lean tactics.

3.3.1. An example of Box in action.

Boxes are visualised as a tree of natural-deduction-style goal states. Let's start with a minimal example to get a feel for the general progression of a proof with the Box architecture. Let's prove P โˆจ Q โ†’ Q โˆจ P using Boxes. The initial box takes the form (3.4).

(3.4).

?๐‘ก : P โˆจ Q โ†’ Q โˆจ P

And we can read (3.4) as saying "we need to show P โˆจ Q โ†’ Q โˆจ P". The ?๐‘ก is the name of the metavariable that the proof of this will be assigned to. The first action is to perform an intro step to get (3.5).

(3.5).
๐‘• : P โˆจ Q

?๐‘ก: Q โˆจ P

To be read as "Given P ∨ Q, we need to show Q ∨ P". So far the structure is the same as would be observed in a flat goal-list structure. The idea is that everything above a horizontal line is a hypothesis (something that we have) and everything below is a target (something we want). When all of the targets are solved, we should have a valid proof of the original target. At this point, we would typically perform an elimination step on ℎ (e.g., cases ℎ in Lean) (3.6).

(3.6).

๐‘•โ‚ : P

?๐‘กโ‚: Q โˆจ P
๐‘•โ‚‚ : Q

?๐‘กโ‚‚: Q โˆจ P

Here in (3.6) we can see nested boxes; each nested box below the horizontal line must be solved to solve the parent box. However, in the box architecture there is an additional move available: a branching on the goal (3.7).

(3.7).
ℎ : P ∨ Q


?๐‘กโ‚ : Q
โ‹

?๐‘กโ‚‚ : P

If a pair of boxes appear with a โ‹ between them, then either of the boxes can be solved to solve the parent box. And then we can eliminate h on the branched box:

(3.8).

๐‘•โ‚ : P


?๐‘กโ‚โ‚ : Q
โ‹

?๐‘กโ‚โ‚‚ : P
๐‘•โ‚‚ : Q


?๐‘กโ‚‚โ‚ : Q
โ‹

?๐‘กโ‚‚โ‚‚ : P

Now at this point, we can directly match ๐‘•โ‚ with ?๐‘กโ‚โ‚‚ and ๐‘•โ‚‚ with ?๐‘กโ‚‚โ‚ to solve the box. Behind the scenes, the box is also producing a result proof term that can be checked by the proof assistant's kernel.
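For comparison, a conventional flat tactic proof of the same statement in Lean 3 might read as follows (shown for reference only; this is not the proof term produced by the Box above):

example (P Q : Prop) : P ∨ Q → Q ∨ P :=
begin
  intro h,
  cases h with h₁ h₂,
  { exact or.inr h₁ },  -- h₁ : P closes the right disjunct, matching ?t₁₂ above
  { exact or.inl h₂ }   -- h₂ : Q closes the left disjunct, matching ?t₂₁ above
end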

3.3.2. Definition of Box

The above formulation is intended to match with the architecture designed in G&G, so that all of the same 'moves' developed in G&G are available. Unlike G&G, the system also interfaces with a flat goal-based development calculus, and so it is possible to use both G&G moves and Lean tactics within the same development calculus. To do this, let's formalise the system presented above in Section 3.3.1 with the following Box datatype (3.9). Define a Binder := (name : Name) ร— (type : Expr) to be a name identifier and a type with notation (nameโˆถtype), using a smaller colon to keep the distinction from a meta-level type annotation.

(3.9).

Inductive definition of Box.

Box ::=
| โ„ (x : Binder) (b : Box) : Box
| ๐’ฏ (m : Binder) (b : Box) : Box
| ๐’ญ (r : Expr) : Box
| ๐’œ (bโ‚ : Box) (r : Binder) (bโ‚‚ : Box) : Box
| ๐’ช (bโ‚ : Box) (bโ‚‚ : Box) : Box
| ๐’ฑ (x : Binder) (t : Expr) (b : Box) : Box

I will represent instances of the Box type with a 2D box notation defined in (3.10) to make the connotations of the datastructure more apparent.

(3.10).

Visualisation rules for the Box type. Each rule takes a pair ๐ฟ โŸผ ๐‘… where ๐ฟ is a constructor for Box and ๐‘… is the visualisation. The idea is that everything above the horizontal line in the box is a hypothesis or a value that we have. Everything below a line within a box is a target, something that we need to find through the proof discovery process. This visualisation is also implemented in Lean using the widgets framework presented in Section 5.7.

โ„ (๐‘ฅ โˆถ ฮฑ) ๐‘ โŸผ
๐‘ฅ : ฮฑ
...๐‘
๐’ฏ (๐‘ฅ โˆถ ฮฑ) ๐‘ โŸผ

?๐‘ฅ : ฮฑ
...๐‘
๐’ญ ๐‘Ÿ โŸผ
โ–ธ ๐‘Ÿ
๐’œ ๐‘โ‚ (๐‘ฅ โˆถ ฮฑ) ๐‘โ‚‚ โŸผ

[๐‘ฅ :=]
...๐‘โ‚
...๐‘โ‚‚
๐’ช ๐‘โ‚ ๐‘โ‚‚ โŸผ

...๐‘โ‚
โ‹
...๐‘โ‚‚
๐’ฑ (๐‘ฅ โˆถ ฮฑ) ๐‘ก
...๐‘
โŸผ
๐‘ฅ := ๐‘ก
...๐‘

These visualisations are also presented directly to the user through the use of the widgets UI framework presented in Chapter 5. The details of this visualisation are given in Section 5.7.

To summarise the roles for each constructor:

  • โ„ ๐‘ฅ ๐‘ is a variable introduction binder, that is, it does the same job as a lambda binder for expressions and is used to introduce new hypotheses and variables.

  • ๐’ฏ ๐‘š ๐‘ is a target binder, it introduces a new metavariable ?๐‘š that the child box depends on.

  • ๐’ญ ๐‘Ÿ is the result box, it depends on all of the variables and targets that are declared above it. It represents the proof term that is returned once all of the target metavariables are solved. Extracting a proof term from a well-formed box will be discussed in Section 3.4.

  • ๐’œ ๐‘โ‚ (๐‘ฅ โˆถ ฮฑ) ๐‘โ‚‚ is a conjunctive pair of boxes. Both boxes have to be solved to complete the proof. Box bโ‚‚ depends on variable ๐‘ฅ. When ๐‘โ‚ is solved, the ๐‘ฅ value will be replaced with the resulting proof term of ๐‘โ‚.

  • ๐’ช ๐‘โ‚ ๐‘โ‚‚ is a disjunctive pair, if either of the child boxes are solved, then so is the total box. This is used to implement branching and backtracking.

  • ๐’ฑ ๐‘ฅ ๐‘ is a value binder. It introduces a new assigned variable.

Boxes also have a set of well-formed conditions designed to follow the typing judgements of the underlying proof-assistant development calculus. This will be developed in Section 3.4.
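To make the datatype concrete, a minimal Lean 3 meta sketch of (3.9) might look as follows; the identifiers binder', box and the constructor names are illustrative and are not the ones used in the implementation:

meta structure binder' : Type :=
(var_name : name)
(ty       : expr)

meta inductive box : Type
| I : binder' → box → box           -- ℐ: introduce a hypothesis or variable
| T : binder' → box → box           -- 𝒯: introduce a target metavariable
| R : expr → box                    -- ℛ: the result proof term
| A : box → binder' → box → box     -- 𝒜: conjunctive pair of boxes
| O : box → box → box               -- 𝒪: disjunctive pair of boxes
| V : binder' → expr → box → box    -- 𝒱: value binder (a let-binding)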

3.3.3. Initialising and terminating a Box

Given an expression representing a theorem statement P : Expr, โˆ… โŠข P โˆถ Prop, we can initialise a box to solve P as ๐‘โ‚€ := ๐’ฏ (๐‘ก โˆถ P) (๐’ญ ๐‘ก) (3.11).

(3.11).

Initial ๐‘โ‚€ : Box given โŠข P โˆถ Prop.


?๐‘ก : P
โ–ธ ?๐‘ก

In the case that P also depends on a telescope of hypotheses Γ ⊢ P ∶ Prop, these can be incorporated by prepending an ℐ box for each ℎ ∈ Γ to the initial 𝑏₀ in (3.11).

Say that a Box is solved when there are no ๐’ฏ-binders remaining in the Box. At this point, the proving process ceases and a proof term and natural language proof may be generated.
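Continuing the illustrative sketch above (and so not the implementation's actual code), the initial box of (3.11) and the 'solved' check might be written as follows; expr.mvar takes a unique name, a pretty name and a type:

-- the initial box 𝒯 (t : P) (ℛ ?t), with the result being the metavariable ?t
meta def box.init (P : expr) : box :=
box.T ⟨`t, P⟩ (box.R (expr.mvar `t `t P))

-- a box is solved when no 𝒯-binders remain anywhere inside it
meta def box.has_target : box → bool
| (box.I _ b)     := box.has_target b
| (box.T _ _)     := tt
| (box.R _)       := ff
| (box.A b₁ _ b₂) := box.has_target b₁ || box.has_target b₂
| (box.O b₁ b₂)   := box.has_target b₁ || box.has_target b₂
| (box.V _ _ b)   := box.has_target b

meta def box.is_solved (b : box) : bool := bnot (box.has_target b)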

3.3.4. Transforming a box

The aim of the game is to solve a box through the use of 'moves'. A move is a partial function on boxes: Move := Box → Option Box. In Section 3.3.1 we saw some examples of moves used to advance the box state and eventually complete it. A complete set of moves that are implemented in the system will be given in Section 3.5. Some moves will of course be nonsense and not produce sound proofs. In Section 3.4 I will define what it means for a move to be sound and to produce a correct proof that can be checked by the ITP's kernel.
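In Lean 3 terms, and again assuming the illustrative box sketch above, the type of moves and one of the simplest moves (𝒪-reduce₁, which will appear in (3.21)) could be sketched as:

meta def move : Type := box → option box

-- 𝒪-reduce₁: if the left branch of a disjunctive box is already a result, keep only that branch
meta def O_reduce₁ : box → option box
| (box.O (box.R e) _) := some (box.R e)
| _                   := none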

3.3.5. Relation to other development calculi

Let's pause now to highlight the similarities and differences of this approach with some other development calculi.

McBride's OLEG [McB00[McB00]McBride, ConorDependently typed functional programs and their proofs (2000)] is the most similar to the design presented here. OLEG 'holes' are functionally the same as metavariables. That is, they are specially tagged variables that will eventually be assigned with expressions. OLEG provides an additional constructor for expressions called 'hole-bindings' or '-bindings'. Because OLEG is a ground-up implementation of a new theorem prover, hole-bindings can be added directly as a constructor for expressions which is not available in Lean (without reimplementing Lean expressions and all of the algorithms upon it)It might be possible to use Lean's expression macro system to implement hole-bindings, but doing so would still require reimplementing a large number of type-context-centric algorithms such as unification [SB01].[SB01]Snyder, Wayne; Baader, FranzUnification theory (2001)Handbook of automated reasoning. These hole-bindings perform the same role as the ๐’ฏ constructor, in that they provide the context of variables that the hole/metavariable is allowed to depend on. But if the only purpose of a hole-binding is to give a context, then why not just explicitly name that context as is done in other theorem provers? The Box architecture given above is intended to give the best of both worlds, in that you still get a shared goal-tree structure without needing to explicitly bind metavariables within the expression tree, instead they are bound in a structure on top of it.

Lean and Coq's proof construction systems make use of the metavariable context approach outlined in Section 2.4. The metavariable context here performs the same role as the 𝒯 target boxes; however, this set of targets is flattened into a list structure rather than stored in a tree as in Box. This makes many aspects such as unification easier, but means that structural sharing (Section 3.1.3) is lost. In Section 3.4.4 I show that we do not have to forgo the algorithms implemented for a flat metavariable structure in order to use Boxes.

In Isabelle, proofs are constructed through manipulating the proof state directly through an LCF-style [Mil72[Mil72]Milner, RobinLogic for computable functions description of a machine implementation (1972)] kernel of available functionsAs can be seen in the implementation sourcecode.. Schematic variables are used to create partially constructed terms.

Sterling and Harper [SH17[SH17]Sterling, Jonathan; Harper, RobertAlgebraic Foundations of Proof Refinement (2017)arXiv preprint arXiv:1703.05215] provide a category-theoretical theory of partially constructed proofs and use these principles in the implementation of RedPRLhttps://redprl.readthedocs.io/en/latest/. They are motivated by the need for a principled way of performing refinement of proofs in a dependently-typed foundation. They develop a judgement-independent framework for describing development calculi within a category-theoretical setting.

Another hierarchical proof system is HiProof [ADL10[ADL10]Aspinall, David; Denney, Ewen; et al.Tactics for hierarchical proof (2010)Mathematics in Computer Science]. HiProof makes use of a tree to write proofs. The nodes of the tree are invocations of inference rules and axioms, and an edge denotes the flow of evidence in the proof. These nodes may be grouped to provide varying levels of detail. These hierarchies are used to describe a proof, whereas a Box here describes a partially completed proof together with a specification of the hypotheses and targets that must be provided to construct the proof.

3.4. Creating valid proof terms from a Box

Because we are using a trusted kernel, the result of producing an invalid proof with Box is a mere inconvenience, since the kernel will simply reject it. However, in order for the Box structure defined in Section 3.3.2 to be useful within a proof assistant such as Lean, as motivated by Section 3.1.4, it is important to make sure that a solved Box produces a valid proof for the underlying trusted kernel. To do this, I will define a typing judgement 𝑀;Γ ⊢ 𝑏 ∶ α and then present a method for extracting a proof term 𝑀;Γ ⊢ 𝑟 ∶ α from 𝑏 with the same type, provided 𝑏 is solved.

3.4.1. Assignability for Boxes

In Section 2.4.2, I introduced the concept of an assignable datastructure for generalising variable-manipulation operations to datatypes other than expressions. We can equip a datatype containing expressions with an assignability structure assign (3.12). This is a variable-context-aware traversal over the expressions present on the datatype. For Box, this traversal amounts to traversing the expressions on each box, while adding to the local context if the subtree is below a binder. The definition of assign induces a definition of variable substitution and abstraction over Boxes.

(3.12).

Definition of assign for Box. See Section 2.4.2 for a description of assignability. The <*> operator is the applicative product for some applicative functor M (see Section 2.2.2). Note that target ๐’ฏ declarations are bound, so for the purposes of assignment they are treated as simple variable binders.

assign (๐‘“ : Context โ†’ Expr โ†’ M Expr) (ฮ“ : Context)
: Box โ†’ M Box
| โ„ ๐‘ฅ ๐‘ โ†ฆ pure โ„ <*> assign ๐‘“ ฮ“ ๐‘ฅ <*> assign ๐‘“ [..ฮ“, ๐‘ฅ] ๐‘
| ๐’ฏ ๐‘š ๐‘ โ†ฆ pure ๐’ฏ <*> assign ๐‘“ ฮ“ ๐‘š <*> assign ๐‘“ [..ฮ“, ๐‘š] ๐‘
| ๐’ญ ๐‘Ÿ โ†ฆ pure ๐’ญ <*> assign ๐‘“ ฮ“ ๐‘Ÿ
| ๐’œ ๐‘โ‚ ๐‘ฅ ๐‘โ‚‚ โ†ฆ pure ๐’œ <*> assign ๐‘“ ฮ“ ๐‘โ‚ <*> assign ๐‘“ ฮ“ ๐‘ฅ <*> assign ๐‘“ [..ฮ“, ๐‘ฅ] ๐‘โ‚‚
| ๐’ช ๐‘โ‚ ๐‘โ‚‚ โ†ฆ pure ๐’ช <*> assign ๐‘“ ฮ“ ๐‘โ‚ <*> assign ๐‘“ ฮ“ ๐‘โ‚‚
| ๐’ฑ ๐‘ฅ ๐‘ก ๐‘ โ†ฆ pure ๐’ฑ <*> assign ๐‘“ ฮ“ ๐‘ฅ <*> assign ๐‘“ ฮ“ ๐‘ก <*> assign ๐‘“ [..ฮ“, ๐‘ฅโ‰”๐‘ก] ๐‘

3.4.2. Typing judgements for Box

In Section 2.4, I defined contexts ฮ“, metavariable contexts ๐‘€. As covered in Carneiro's thesis [Car19[Car19]Carneiro, MarioLean's Type Theory (2019)], Lean's type theory affords a set of inference rules on typing judgements ฮ“ โŠข ๐‘ก โˆถ ฮฑ, stating that the expression ๐‘ก has the type ฮฑ in the context ฮ“. However these inference rules are only defined for expressions ๐‘ก : Expr that do not contain metavariables. In Section A.1, I extend these judgements (A.3), (A.4) to also include expressions containing metavariable contexts ๐‘€;ฮ“ โŠข ๐‘ก โˆถ ฮฑ.

In a similar way, we can repeat this for Box: given contexts ๐‘€ and ฮ“ we can define a typing judgement ๐‘€;ฮ“ โŠข ๐‘ โˆถ ฮฒ where ๐‘ : Box and ฮฒ is a type. The inference rules for this are given in (3.13). These have been chosen to mirror the typings given in Section 2.4.3.

(3.13).

Typing judgement rules for Box. Compare with (A.3) and (A.4) in Section A.1.

๐‘€;(..ฮ“, ๐‘ฅโˆถฮฑ) โŠข ๐‘ โˆถ ฮฒ

โ„-typing
๐‘€;ฮ“ โŠข (โ„ (๐‘ฅโˆถฮฑ), ๐‘) โˆถ (ฮ  (๐‘ฅโˆถฮฑ), ฮฒ)
๐‘€;ฮ“ โŠข ๐‘ก โˆถ ฮฑ

๐’ญ-typing
๐‘€;ฮ“ โŠข ๐’ญ ๐‘ก โˆถ ฮฑ
[..๐‘€, โŸจ๐‘š,ฮฑ,ฮ“โŸฉ];ฮ“ โŠข ๐‘ โˆถ ฮฒ

๐’ฏ-typing
๐‘€;ฮ“ โŠข (๐’ฏ (?๐‘ฅโˆถฮฑ), ๐‘) โˆถ ฮฒ
๐‘€;ฮ“ โŠข ๐‘โ‚ โˆถ ฮฑ
๐‘€;[..ฮ“, (๐‘ฅโˆถฮฑ)] โŠข ๐‘โ‚‚ โˆถ ฮฒ

๐’œ-typing
๐‘€;ฮ“ โŠข (๐’œ ๐‘โ‚ (๐‘ฅโˆถฮฑ) ๐‘โ‚‚) โˆถ ฮฒ
๐‘€;ฮ“ โŠข ๐‘โ‚ โˆถ ฮฑ
๐‘€;ฮ“ โŠข ๐‘โ‚‚ โˆถ ฮฑ

๐’ช-typing
๐‘€;ฮ“ โŠข (๐’ช ๐‘โ‚ ๐‘โ‚‚) โˆถ ฮฑ
๐‘€;ฮ“ โŠข ๐‘ฃ โˆถ ฮฑ
๐‘€;[..ฮ“, (๐‘ฅโˆถฮฑ)] โŠข ๐‘ โˆถ ฮฒ

๐’ฑ-typing
๐‘€;ฮ“ โŠข (๐’ฑ (๐‘ฅโˆถฮฑโ‰”๐‘ฃ), ๐‘) โˆถ ฮฒ

These typing rules have been designed to match the typing rules (A.3) of the underlying proof terms that a Box will produce when solved, as I will show next.

3.4.3. Results of Boxes

The structure of Box is designed to represent a partially complete expression without the use of unbound metavariables. Boxes can be converted to expressions containing unbound metavariables using results : Box โ†’ Set Expr as defined in (3.14).

(3.14).

Definition of results. ๐‘Ÿ[๐‘ฅ] denotes a delayed abstraction (Section A.4) needed in the case that ๐‘Ÿ contains metavariables.

results
: Box โ†’ Set Expr
| โ„ (๐‘ฅโˆถฮฑ) ๐‘ โ†ฆ {(Expr.ฮป (๐‘ฅโˆถฮฑ) ๐‘Ÿ[๐‘ฅ]) for ๐‘Ÿ in results ๐‘}
| ๐’ฏ (๐‘ฅโˆถฮฑ) ๐‘ โ†ฆ results ๐‘
| ๐’ญ ๐‘ก โ†ฆ {๐‘ก}
| ๐’œ ๐‘โ‚ (๐‘ฅโˆถฮฑ) ๐‘โ‚‚ โ†ฆ
{๐‘  for ๐‘  in results โฆƒ๐‘ฅ โ†ฆ ๐‘Ÿโฆ„ ๐‘โ‚‚
for ๐‘Ÿ in results ๐‘โ‚}
| ๐’ช ๐‘โ‚ ๐‘โ‚‚ โ†ฆ results ๐‘โ‚ โˆช results ๐‘โ‚‚
| ๐’ฑ (๐‘ฅโˆถฮฑ) ๐‘ โ†ฆ {(Expr.let ๐‘ฅ ๐‘ ๐‘Ÿ) for ๐‘Ÿ in results ๐‘}

Say that a ๐‘ : Box is solved when there are no remaining ๐’ฏ entries in it. Then in this case, the set of results for ๐‘ will not contain any metavariables and so can be checked by the kernel. In the case that ๐‘ is unsolved, the results of ๐‘ will contain unbound variables for each ๐’ฏ-binder that need to be assigned. The claim to make here is that the typing system I've placed on Boxes in (3.13) is compatible with the results of these expressions as expressed by (3.15).

(3.15).

Statement of the compatibility lemma. That is, take a ๐‘ : Box and ฮฑ : Expr, then if ๐‘ โˆถ ฮฑ in the context ๐‘€;ฮ“ and ๐‘Ÿ : Expr is a result of ๐‘ (3.14); then ๐‘Ÿโˆถฮฑ in the context ๐‘€;ฮ“ with additional metavariables added for each of the targets in ๐‘.

๐‘€;ฮ“ โŠข ๐‘ โˆถ ฮฑ
๐‘Ÿ โˆˆ results ๐‘

[..๐‘€, ..targets ๐‘];ฮ“ โŠข ๐‘Ÿ โˆถ ฮฑ

Here in (3.15), targets 𝑏 is the set of metavariable declarations formed by accumulating all of the 𝒯-binders in 𝑏. (3.15) states that, given a box 𝑏 and an expression 𝑟 that is a result of 𝑏, if 𝑏 is a valid box with type α then 𝑟 also types to α in the metavariable context extended with all of the targets of 𝑏.

We need to derive (3.15) because it ensures that our Box will produce well-typed expressions when solved. Using (3.15), we can find moves m : Box โ†’ Option Box - partial functions from Box to Box - such that ๐‘€;ฮ“ โŠข ๐‘ โˆถ ฮฑ โ‡’ ๐‘€;ฮ“ โŠข m ๐‘ โˆถ ฮฑ whenever ๐‘ โˆˆ dom m. Hence a chain of such moves will produce a result that satisfies the initial goal.

3.4.3.1. Proof of (3.15)

Without loss of generality, we need only prove (3.15) for a 𝑏 : Box with no 𝒪 boxes and a single result [𝑟] = results 𝑏. To see why, note that any box containing an 𝒪 can be split as in (3.16) until each Box has one result. Then we may prove (3.15) for each of these in turn.

(3.16).
results(
...๐‘

...๐‘โ‚
โ‹
...๐‘โ‚‚
) = results(
...๐‘

...๐‘โ‚
) โˆช results(
...๐‘

...๐‘โ‚‚
)

Write result ๐‘ to mean this single result ๐‘Ÿ. Performing induction on the typing judgements for boxes, the most difficult is ๐’œ-typing, where we have to show (3.17).

(3.17).

The induction step that must be proven for the ๐’œ-box case of (3.15)

๐‘€;ฮ“ โŠข ๐‘โ‚ โˆถ ฮฑ
๐‘€;[..ฮ“, (๐‘ฅโˆถฮฑ)] โŠข ๐‘โ‚‚ โˆถ ฮฒ
๐‘€';ฮ“ โŠข result ๐‘โ‚ โˆถ ฮฑ
๐‘€';[..ฮ“, (๐‘ฅโˆถฮฑ)] โŠข result ๐‘โ‚‚ โˆถ ฮฒ

๐‘€';ฮ“ โŠข result (๐’œ ๐‘โ‚ (๐‘ฅโˆถฮฑ) ๐‘โ‚‚) โˆถ ฮฒ

Where ๐‘€' := [..๐‘€, ..targets (๐’œ ๐‘โ‚ (๐‘ฅโˆถฮฑ) ๐‘โ‚‚)]. To derive this it suffices to show that result is a 'substitution homomorphism':

(3.18).

result is a substitution homomorphism

๐‘€;ฮ“ โŠข ฯƒ ok

๐‘€;ฮ“ โŠข ฯƒ (result ๐‘) โ‰ก result (ฯƒ ๐‘)

where ฯƒ is a substitutionSee Section 2.4.1. A substitution is a partial map from variables to expressions. in context ฮ“ and โ‰ก is the definitional equality judgement under ฮ“. Because then we have

(3.19).

Here, โฆƒ๐‘ฅ โ†ฆ ๐‘’โฆ„ ๐‘ is used to denote substitution applied to ๐‘. That is, replace each occurrence of ๐‘ฅ in ๐‘ with ๐‘’

๐‘€';ฮ“ โŠข
result (๐’œ ๐‘โ‚ (๐‘ฅโˆถฮฑ) ๐‘โ‚‚)
โ‰ก result (โฆƒ๐‘ฅ โ†ฆ result ๐‘โ‚โฆ„ ๐‘โ‚‚)
โ‰ก โฆƒ๐‘ฅ โ†ฆ result ๐‘โ‚โฆ„ (result ๐‘โ‚‚)
โ‰ก (ฮป (๐‘ฅโˆถฮฑ), result ๐‘โ‚‚) (result ๐‘โ‚)

We can see the substitution homomorphism property of result holds by inspection on the equations of result, observing that each LHS expression behaves correctly. Here is the case for โ„:

(3.20).

result and σ obey the 'substitution homomorphism' property in the case of ℐ. Here λ is used to denote the internal lambda constructor for expressions. Note that here we are assuming dom σ ⊆ Γ, so 𝑥 ∉ dom σ; otherwise 𝑥 can be renamed so that it avoids dom σ.

๐‘€';ฮ“ โŠข
result (ฯƒ (โ„ (๐‘ฅโˆถฮฑ) ๐‘))
โ‰ก result $ โ„ (๐‘ฅโˆถ(ฯƒ ฮฑ)) (ฯƒ ๐‘)
โ‰ก (ฮป (๐‘ฅโˆถ(ฯƒ ฮฑ)), (result (ฯƒ ๐‘))[๐‘ฅ])
โ‰ก (ฮป (๐‘ฅโˆถ(ฯƒ ฮฑ)), (ฯƒ (result ๐‘))[๐‘ฅ]) -- โˆต induction hypothesis
โ‰ก ฯƒ (ฮป (๐‘ฅโˆถฮฑ), (result ๐‘))
โ‰ก ฯƒ (result (โ„ (๐‘ฅโˆถฮฑ) ๐‘))

This completes the proof of type compatibility (3.15). By using compatibility, we can determine whether a given move m : Box โ†’ Option Box will be sound. Define a move m to be sound when for all ๐‘ โˆˆ dom m we have some ฮฑ such that ๐‘€;ฮ“ โŠข (m ๐‘) โˆถ ฮฑ whenever ๐‘€;ฮ“ โŠข ๐‘ โˆถ ฮฑ.

Hence, to prove a starting propositionOr, in general, a type α. P, start with an initial box 𝑏₀ := 𝒯 (?t₀∶P) (ℛ ?t₀). Then if we only map 𝑏₀ with sound moves to produce a solved box 𝑏ₙ, each of results 𝑏ₙ will have type P and hence will be accepted by Lean's kernel.

Given a move m that is sound on 𝑏, we can also construct a sound move on ℐ (𝑥∶α) 𝑏 that acts on the nested box 𝑏.

3.4.4. Escape-hatch to tactics

As discussed in Section 2.4.4, many provers, including Lean 3, come with a tactic combinator language to construct proofs through mutating an object called the TacticState comprising a metavariable context and a list of metavariables called the goals. In Section 3.1 I highlighted some of the issues of this approach, but there are many built-in and community-made tactics which can still find use within a HumanProof proof. For this reason, it is important for HumanProof to provide an 'escape hatch' allowing these tactics to be used within the context of a HumanProof proof seamlessly. I have achieved this compatibility system between Boxes and tactics through defining a zipper [Hue97[Hue97]Huet, GรฉrardFunctional Pearl: The Zipper (1997)Journal of functional programming] structure on Boxes (Section A.2) and then a set of shim operations for soundly converting an underlying TacticState to and from a Box object. The details of this mechanism can be found in Section A.2. It is used to implement some of the moves presented next in Section 3.5, since in some cases the move is the same as its tactic-based equivalent.

3.5. Moves for Box.

Using the framework presented above we can start defining sound moves on Boxes and use Box to actualise the kinds of reasoning discussed in Section 3.1. Many of the moves here will be similar to inference rules that one would find in a usual system, and so I will not cover these ones in great detail. I will also skip many of the soundness proofs, because in Appendix A I instead provide an 'escape hatch' for creating sound moves from tactics in the underlying metavariable-oriented development calculus. Some of these moves are

3.5.1. Simplifying moves

We have the following moves for reducing Boxes, these should be considered as tidying moves.

(3.21).

Reduction moves for Box. These are moves which should always be applied when they can be, and act as a set of reductions on a box. Note that these reductions are not confluent; for example, 𝒪-reduce₁ and 𝒪-reduce₂ applied to 𝒪 (ℛ 𝑒₁) (ℛ 𝑒₂) produce different terminals.

๐’ช-reduceโ‚ :=
โ–ธ ๐‘’
โ‹
...๐‘โ‚‚
โŸผ
โ–ธ ๐‘’
๐’ช-reduceโ‚‚ :=
...๐‘โ‚
โ‹
โ–ธ ๐‘’
โŸผ
โ–ธ ๐‘’
๐’œ-reduce :=

๐‘กโ‚€ :=
โ–ธ ๐‘’
...๐‘
โŸผ
...(โฆƒ๐‘กโ‚€ โ†ฆ ๐‘’โฆ„ ๐‘)
๐’ฏ-reduce :=

?๐‘กโ‚€ : ฮฑ
โ–ธ ๐‘’
โŸผ
โ–ธ ๐‘’
if ?๐‘กโ‚€ โˆ‰ ๐‘’

3.5.2. Deleting moves

These are moves that cause a Box to simplify but which are not always 'safe' to do, in the sense that they may lead to a Box which is impossible to solve.

(3.22).

Deletion moves for Box.

๐’ช-revertโ‚ :=
...๐‘โ‚
โ‹
...๐‘โ‚‚
โŸผ
...๐‘โ‚‚
๐’ช-revertโ‚‚ :=
...๐‘โ‚
โ‹
...๐‘โ‚‚
โŸผ
...๐‘โ‚
๐’ฑ-delete :=
๐‘ฅ : ฮฑ := ๐‘’

...๐‘
โŸผ

...(โฆƒ๐‘ฅ โ†ฆ ๐‘’โฆ„ ๐‘)

3.5.3. Lambda introduction

In normal tactics, an intro tactic is used to introduce Π-bindersΠ-binders Π (𝑥 : α), β are the dependent generalisation of the function type α → β where the return type β may depend on the input value 𝑥 : α.. That is, if the goal state is ⊢ Π (𝑥 : α), β[𝑥], the intro tactic will produce a new state (𝑥 : α) ⊢ β[𝑥]. To perform this, it assigns the target metavariable ?t₁ : Π (𝑥 : α), β[𝑥] with the expression λ (𝑥 : α), ?t₂, where ?t₂ : β[𝑥] is the new target metavariable whose context includes the additional local variable 𝑥 : α.
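For reference, this is the behaviour of the ordinary Lean 3 intro tactic on a Π-target (a standalone example, independent of HumanProof):

example : ∀ (n : ℕ), n + 0 = n :=
begin
  intro n,   -- assigns the target metavariable with λ (n : ℕ), ?t₂ and leaves ?t₂ : n + 0 = n
  refl       -- n + 0 reduces to n definitionally
end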

The intro move on Box is analogous, although there are some additional steps required to ensure that contexts are preserved correctly. The simplified case simple_intro (3.23), performs the same steps as the tactic version of intro.

(3.23).

A simple variable introduction move. Note that the new target ?t₂ is not wrapped in a lambda abstraction because it is abstracted earlier by the ℐ box.

simple_intro :=

?tโ‚ : ฮ  (๐‘ฅ : ฮฑ), ฮฒ
โ–ธ ?tโ‚
โŸผ
๐‘ฅ : ฮฑ

?tโ‚‚ : ฮฒ
โ–ธ ?tโ‚‚

The full version (3.24) is used in the case that the โ„-box is not immediately followed by an ๐’ญ-box. In this case, a conjunctive ๐’œ-box must be created in order to have a separate context for the new (๐‘ฅ : ฮฑ) variable.

(3.24).

The full version of the lambda introduction move. The box on the rhs of โŸผ is an ๐’œ box: ๐’œ (โ„ ๐‘ฅ, ๐’ฏ ?t, ๐’ญ ?tโ‚) tโ‚€ ๐‘.

intro :=

?tโ‚€ : ฮ  (๐‘ฅ : ฮฑ), ฮฒ
...๐‘
โŸผ

tโ‚€ :=
๐‘ฅ : ฮฑ

?tโ‚ : ฮฒ
โ–ธ ?tโ‚
...๐‘

The fact that intro is sound follows mainly from the design of the definitions of โ„: Define ๐‘' to be โ„ (๐‘ฅ : ฮฑ), ๐’ฏ (?tโ‚ : ฮฒ), ๐’ญ ?tโ‚, represented graphically in (3.25). The typing judgement (3.25) follows from the typing rules (3.13).

(3.25).

The judgement that ๐‘' has type ฮ  (๐‘ฅ : ฮฑ), ฮฒ. ฮฒ may possibly depend on ๐‘ฅ.

โŠข
๐‘ฅ : ฮฑ

?tโ‚ : ฮฒ
โ–ธ ?tโ‚
: ฮ  (๐‘ฅ : ฮฑ), ฮฒ

By the definition of a sound move we may assume โŠข (๐’ฏ ?tโ‚€, ๐‘) : ฮณ for some type ฮณ. From the ๐’ฏ typing rule (3.13) we then have [?tโ‚€];โˆ… โŠข ๐‘ : ฮณ. Then it follows from ๐’œ typing (3.13) that โŠข ๐’œ ๐‘' (tโ‚€ : ฮ  (๐‘ฅ : ฮฑ), ฮฒ) ๐‘ : ฮณ where ๐‘' := โ„ (๐‘ฅ : ฮฑ), ๐’ฏ (?tโ‚ : ฮฒ), ๐’ญ ?tโ‚.

3.5.4. Split and cases moves

Here I present some moves for performing introduction and elimination of the ∧ type. The Box version of split performs the same operation as split in Lean: introducing a conjunction. A target ?t₀ : P ∧ Q is replaced with a pair of new targets (?t₁, ?t₂). These can be readily generalised to other inductive datatypes with one constructorOne caveat is that the use of ∃ requires a non-constructive axiom of choice with this method. This is addressed in Section 3.5.8.. In the implementation, these moves are implemented using the tactic escape-hatch described in Appendix A.

(3.26).

Move for introducing conjunctions.

split :=

?tโ‚€ : P โˆง Q
...๐‘
โŸผ

?tโ‚ : P
?tโ‚‚ : Q
...(โฆƒ?tโ‚€ โ†ฆ โŸจ?tโ‚,?tโ‚‚โŸฉโฆ„ ๐‘)

Similarly we can eliminate a conjunction with cases.

(3.27).

Move for eliminating conjunctions. fst : P โˆง Q โ†’ P and snd : P โˆง Q โ†’ Q are the โˆง-projections. In the implementation; hโ‚€ is hidden from the visualisation to give the impression that the hypothesis hโ‚€ has been 'split' in to hโ‚ and hโ‚‚.

cases :=
hโ‚€ : P โˆง Q

...๐‘
โŸผ
hโ‚€ : P โˆง Q
hโ‚ : P := fst hโ‚€
hโ‚‚ : Q := snd hโ‚€

...๐‘

3.5.5. Induction moves

โˆง-elimination (3.27) from the previous section can be seen as a special case of induction on datatypes. Most forms of dependent type theory use inductive datatypes (See Section 2.2.3) to represent data and propositions, and use induction to eliminate them. To implement induction, in CICCalculus of Inductive Constructions. The foundation (Section 2.1.3) used by Lean 3 and Coq. Inductive datastructures (Section 2.2.3) for the Calculus of Constructions [CH88] were first introduced by Pfenning et al [PP89]. See [Car19 ยง2.6] for the axiomatisation of inductive types within Lean 3's type system.[CH88]Coquand, Thierry; Huet, Gรฉrard P.The Calculus of Constructions (1988)Inf. Comput., [PP89]Pfenning, Frank; Paulin-Mohring, ChristineInductively defined types in the Calculus of Constructions (1989)International Conference on Mathematical Foundations of Programming Semantics, [Car19]Carneiro, MarioLean's Type Theory (2019) each inductive datatype comes equipped with a special constant called the recursor. This paradigm broadens the use of the words 'recursion' and 'induction' to include datastructures that are not recursive.

For example, we can view conjunction A โˆง B : Prop as an inductive datatype with one constructor mk : A โ†’ B โ†’ A โˆง B. Similarly, a disjunctive A โˆจ B has two constructors inl : A โ†’ A โˆจ B and inr : B โ†’ A โˆจ B. Interpreting โ†’ as implication, we recover the basic introduction axioms for conjunction and disjunction. The eliminators for โˆง and โˆจ are implemented using recursors given in (3.28).

(3.28).

Recursors for conjunction and disjunction.

โˆง-rec : โˆ€ (A B C : Prop), (A โ†’ B โ†’ C) โ†’ (A โˆง B) โ†’ C
โˆจ-rec : โˆ€ (A B C : Prop), (A โ†’ C) โ†’ (B โ†’ C) โ†’ (A โˆจ B) โ†’ C
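Up to argument order, these recursors are available in Lean 3 as the eliminators and.elim and or.elim, which can be used directly as terms (shown for reference):

example (P Q C : Prop) (f : P → Q → C) (h : P ∧ Q) : C := and.elim h f
example (P Q C : Prop) (f : P → C) (g : Q → C) (h : P ∨ Q) : C := or.elim h f g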

Performing an induction step in a CIC theorem prover such as Lean amounts to the application of the relevant recursor. Case analysis on a disjunctive hypothesis makes for a good example of recursion: the recursor ∨-rec : (P → C) → (Q → C) → (P ∨ Q) → C is used. Given a box ℐ (h₀ : P ∨ Q), 𝑏 where h₀ ⊢ 𝑏 ∶ α, the ∨-cases move sends this to the box defined in (3.29). This is visualised in (3.30).

(3.29).

Explicit datastructure showing the resulting Box after performing โˆจ-cases on โ„ (hโ‚€ : P โˆจ Q), ๐‘.

๐’œ (โ„ (hโ‚โˆถP), ๐‘โ‚) (๐‘โ‚โˆถP โ†’ ฮฑ) (
๐’œ (โ„ (hโ‚‚โˆถQ), ๐‘โ‚‚) (๐‘โ‚‚โˆถQ โ†’ ฮฑ) (
๐’ญ (โˆจ-rec ๐‘โ‚ ๐‘โ‚‚ hโ‚€)
)
)
where ๐‘โ‚ := โฆƒhโ‚€ โ†ฆ inl hโ‚โฆ„ ๐‘
๐‘โ‚‚ := โฆƒhโ‚€ โ†ฆ inr hโ‚‚โฆ„ ๐‘
(3.30).

Special case of recursion for eliminating โˆจ statements. The right-hand side of โŸผ is simplified for the user, but is represented as a nested set of ๐’œ boxes as explicitly written in (3.29). ๐‘โ‚ and ๐‘โ‚‚ are defined in (3.29).

cases :=
hโ‚€ : P โˆจ Q

...๐‘
โŸผ

hโ‚ : P

...๐‘โ‚
hโ‚‚ : Q

...๐‘โ‚‚

Note that the ๐‘ : Box in (3.30) may contain multiple targets. When the cases move is applied to โ„ (hโ‚€โˆถP โˆจ Q), ๐‘, the resulting Box on the rhs of (3.30) results in two copies of these targets. The implements the design requirement of structural sharing of targets as motivated in Section 3.1.3. Structural sharing is a significant advantage over the goal-state style approach to tactics, where the equivalent cases tactic would have to be applied separately to each goal if there were multiple targets.

This structurally-shared induction step also works on recursive datastructures such as lists and natural numbers, as shown in (3.32).

(3.31).

Recursor for natural numbers. โ„•-rec can be seen to have the same signature as mathematical induction on the natural numbers.

โ„•-rec :
(๐’ž : โ„• โ†’ Type) -- motive
โ†’ (๐’ž 0) -- zero case
โ†’ ((๐‘– : โ„•) โ†’ ๐’ž ๐‘– โ†’ ๐’ž (๐‘– + 1)) -- successor case
โ†’ (๐‘– : โ„•) โ†’ ๐’ž ๐‘–
(3.32).

Induction move on natural numbers. Implemented using the 'escape hatch' detailed in Appendix A. Here, ฮฑ is the result type of ๐‘ (Section 3.4.2). That is, (๐‘›:โ„•) โŠข ๐‘ โˆถ ฮฑ.

induction :=
๐‘› : โ„•

...๐‘
โŸผ


...โฆƒ๐‘› โ†ฆ 0โฆ„๐‘
๐‘› : โ„•
๐‘• : ฮฑ

...โฆƒ๐‘› โ†ฆ ๐‘›+1โฆ„๐‘
(3.33).

Detail on the rhs of (3.32). The signature for โ„•-rec is given in (3.31).

๐’œ (โฆƒ๐‘› โ†ฆ 0โฆ„๐‘) (๐‘โ‚ โˆถ โฆƒ๐‘› โ†ฆ 0โฆ„ฮฑ) (
๐’œ (โ„ (๐‘› โˆถ โ„•), โ„ (๐‘• โˆถ ฮฑ), โฆƒ๐‘› โ†ฆ ๐‘›+1โฆ„๐‘) (๐‘โ‚‚โˆถ โฆƒ๐‘› โ†ฆ ๐‘›+1โฆ„ฮฑ) (
๐’ญ (โ„•-rec (๐‘› โ†ฆ ฮฑ) ๐‘โ‚ ๐‘โ‚‚ ๐‘›)
)
)

In general, finding the appropriate motive 𝒞 for an induction step amounts to a higher-order unification problem, which was shown to be undecidable [Dow01[Dow01]Dowek, GilesHigher-order unification and matching (2001)Handbook of automated reasoning §3]. However, in many practical cases 𝒞 can be found, and higher-order provers come equipped with heuristics for these cases, an early example being Huet's semidecidable algorithm. Rather than reimplementing these heuristics, I implement induction moves on Box by using the 'escape hatch' feature (Section 3.4.4).
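For reference, the standard Lean 3 induction tactic to which the escape hatch ultimately delegates behaves as follows (a standalone example; here the motive 𝒞 is inferred automatically):

example (n : ℕ) : 0 + n = n :=
begin
  induction n with n ih,
  { refl },                   -- zero case: 0 + 0 = 0
  { rw [nat.add_succ, ih] }   -- successor case, rewriting with the inductive hypothesis ih
end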

3.5.6. Introducing ๐’ช boxes

The purpose of ๐’ช boxes is to enable backtracking and branches on Boxes that enables structural sharing. The G&G prover [GG17[GG17]Ganesalingam, Mohan; Gowers, W. T.A fully automatic theorem prover with human-style output (2017)J. Automated Reasoning] makes use of a similar approach. For example, suppose that we had a target x โˆˆ A โˆช B for some sets A, B. We might have some lemmas of the form P โ†’ x โˆˆ A and Q โ†’ x โˆˆ B but we are not yet sure which one to use. Currently in Lean, if you don't yet know which injection to use, you have to guess and manually backtrack. However there may be some clues on which lemma is correct that only become apparent after applying an injection. Automation usually takes this in to account either through symbol counting or by using a representation of goals such as sequent calculus that avoids this problem. The problem with this is that you can't use the

The ๐’ช box allows us to explore various counterfactuals without having to perform any user-level backtracking (that is, having to rewrite proofs). The primitive move that creates new ๐’ช-boxes is shown in (3.34), this is used to make more 'human-like' moves such as โˆจ-split (3.35).

(3.34).

Move for introducing an ๐’ช-box by duplication.

๐’ช-intro :=
...๐‘
โŸผ
...๐‘
โ‹
...๐‘
(3.35).

Move for introducing an ๐’ช-box by duplication.

โˆจ-intro :=

?๐‘ก : P โˆจ Q
...๐‘
โŸผ

?๐‘ก : P
...๐‘
โ‹

?๐‘ก : Q
...๐‘

3.5.7. Unification under Boxes

Unification is the process of taking a pair of expressions ๐‘™ ๐‘Ÿ : Expr within a joint context ๐‘€;ฮ“ and finding a valid set of assignments of metavariables ฯƒ in ๐‘€ such that (๐‘€ + ฯƒ);ฮ“ โŠข ๐‘™ โ‰ก ๐‘Ÿ. Rather than develop a whole calculus of sound unification for the Box system, I can use the 'escape hatch' tactic compatibility layer developed in Appendix A to transform a sub-Box to a metavariable context and then use the underlying theory of unification used for the underlying development calculus of the theorem prover (in this case Lean). This is the correct approach because unifiers and matchers for theorem provers are usually very well developed in terms of both features and optimisation, so it wouldn't make sense to make a unifier from scratch when the host proof assistant has a perfectly good one already. This approach also has the benefit of dramatically reducing the size of this chapter.

3.5.8. Apply

In textbook proofs of mathematics, often an application of a lemma will also act under โˆƒ binders. For example, let's look at the application of fs ๐‘› being continuous from earlier.

(3.36).

An example lemma hโ‚ to apply. hโ‚ is a proof that fs ๐‘› is continuous.

hโ‚ :
โˆ€ (๐‘ฅ : X) (ฮต : โ„) (hโ‚€ : ฮต > 0),
โˆƒ (ฮด : โ„) (hโ‚ : ฮด > 0),
โˆ€ (๐‘ฆ : X) (hโ‚‚ : dist ๐‘ฅ ๐‘ฆ < ฮด), dist (f ๐‘ฅ) (f ๐‘ฆ) < ฮต

In the example, the application of h₁ with 𝑝, ε, h₃, the elimination of the existential quantifier δ, and the application of further arguments y all happen in one step and without much exposition of what δ depends on. A similar phenomenon occurs in backwards reasoning. If the target is dist (f 𝑥) (f 𝑦) < ε, in proof texts the continuity of f will be applied in one step to replace this goal with dist x y < δ, where δ is understood to be an 'output' of applying the continuity of f.

Contrast this with the logically equivalent Lean tactic script fragment (3.37):

(3.37).

A Lean tactic-mode proof fragment that is usually expressed in one step by a human but which requires two steps in Lean. The show lines can be omitted and are provided for clarity, to show what the goal state is before and after the obtain and apply steps. The obtain ⟨_,_,_⟩ : 𝑃 tactic creates a new goal t : 𝑃 and then, after this goal is solved, performs elimination on t with the given pattern.

...
show dist (f x) (f y) < ฮต,
obtain โŸจฮด, ฮด_pos, hโ‚โŸฉ : โˆƒ ฮด, ฮด > 0 โˆง โˆ€ y, dist x y < ฮด โ†’ dist (f x) (f y) < ฮต,
apply โ€นcontinuous fโ€บ,
apply hโ‚,
show dist x y < ฮด,
...

In order to reproduce this human-like proof step, we need to develop a theory for considering these 'complex applications'. A further property we desire is that results of the complex application must be stored such that we can recover a natural language write-up to explain it later (e.g., creating "Since f is continuous at x, there is some ฮด...").

The apply subsystem works by performing a recursive descent on the type of the assumption being applied. For example, applying the lemma given in (3.36) to a target 𝑡 : P will attempt to unify P with dist (f ?𝑥) (f ?𝑦) < ?ε with new metavariables ?𝑥 ?𝑦 : X, ?ε : ℝ. If the match is successful, it will create a new target for each variable in a Π-binderNote that ∀ is sugar for Π. above the matching expression and a new 𝒱-binder for each introduced ∃-variable and each conjunct. These newly introduced nested boxes appear in the same order as they appear in the applied lemma.

An example of applying (3.36) to the target dist (f x) (f y) < ฮต can be seen in (3.38).

(3.38).

An example of applying (3.36) to tโ‚. It produces a set of nested targets in accordance with the structure of the binders in (3.36). Result Boxes are omitted.

apply โ€นcontinuous fโ€บ :
๐‘ฅ ๐‘ฆ : X
ฮต : โ„

?tโ‚ : dist (f ๐‘ฅ) (f ๐‘ฆ) < ฮต
โŸผ
๐‘ฅ ๐‘ฆ : X
ฮต : โ„

?tโ‚‚ : ฮต > 0
ฮด : โ„ := _
hโ‚‚ : ฮด > 0 := _

?tโ‚ƒ : dist ๐‘ฅ ๐‘ฆ < ฮด

One complication with this approach to apply is that many logical inference steps are performed in one go when applying a lemma. There is also a technical caveat with non-projectable structures such as ∃ (δ : ℝ), P. By default, Lean is a non-classical theorem prover, which here amounts to saying that the axiom of choice is not assumed automatically. Without the axiom of choice, it is not generally possible to construct a function ε : (∃ (𝑥 : α), P[𝑥]) → α such that P[ε ℎ] is true for all ℎ : ∃ (𝑥 : α), P. To

This apply system can be used for both forwards and backwards reasoning moves. The above deals with the backwards case; in the forwards case the task is reversed, with a variable bound by a Π-binder now being the subject to match against the forwards-applied hypothesis.

3.6. Natural language generation of proofs

In this section I will detail how the above box architecture is used to produce natural language writeups as the proof progresses. The general field is known as Natural Language Generation (NLG). You can find a background survey of NLG both broadly and within the context of generating proofs in Section 2.8.

In this section I lean on the work of Ganesalingam, who in his thesis [Gan10[Gan10]Ganesalingam, MohanThe language of mathematics (2010)] has specified a working theory of the linguistics of natural language mathematics. As well as generating a formally verifiable result of a proof, I also extend the work of G&G by providing some new mechanisms for converting Lean predicates and typeclasses into English language sentences. In the implementation of the Robot theorem prover, many natural language constructs such as "X is a metric space" were hard-coded into the system; in this work I provide a general framework for attaching verbalisations of these kinds of statements to typeclasses and predicates within Lean. I also extend the work by making the resulting write-up interactive: a partial proof write-up is emitted if the proof state is not yet solved, and the natural language write-up can be inspected through the widgets system. In contrast, G&G's output was a static file.

The goal of this section is to demonstrate that the Box architecture above is representative of human-like reasoning by constructing natural language writeups of the proofs created using Boxes. As such the NLG used here is very simple compared to the state of the art and doesn't make use of any modern techniques such as deep learning. The output of this system is evaluated by real, human mathematicians in Chapter 6. An example of a proof generated by the system is shown below in Output 3.39. There are some challenges in converting a Box proof to something that reads like a mathematical proof that I will detail here.

Output 3.39.

Output from the HumanProof natural language write-up system for a proof that the composition of continuous functions is continuous.

Let , and be metric spaces, let be a function and let be a function . Suppose is continuous and is continuous. We need to show that is continuous. Let and let . We must choose such that . Since is continuous, there exists a such that whenever . Since is continuous, there exists a such that whenever . Since , we are done by choosing to be .

3.6.1. Overview

The architecture of the NLG component is given in Figure 3.40. The design is similar to the standard architecture discussed in Section 2.8.1. In Section 3.1.2 I explained the decision to design the system to permit only a restricted set of moves on a Box representing the goal state of the prover. To create the natural language write-up from these moves, each move also emits an Act object. This is an inductive datatype representing the kind of move that occurred. So, for example, there is an Intro : List Binder → Act that is emitted whenever the intro move is performed, storing the list of binders that were introduced. A list of these Acts is held on the state monad for the interactive proof session. This list of acts is then fed to a micro-planner, which converts the list of acts to an abstract representation of sentencesSometimes referred to as a phrase specification.. These sentences are then realised with the help of Run, which is a form of S-expression [McC60[McC60]McCarthy, JohnRecursive functions of symbolic expressions and their computation by machine, Part I (1960)Communications of the ACM] containing text and expressions for interactive formatting. This natural language proof is then rendered in the output window using the widgets system (Chapter 5).
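A minimal sketch of the Act type described above; only the Intro constructor is named in the text, so representing binders as (name, type) pairs and the remark about further constructors are illustrative assumptions rather than the implementation's definitions:

meta inductive act : Type
| Intro : list (name × expr) → act   -- emitted by the intro move, recording the introduced binders
-- one further constructor per kind of move is emitted by the full system (omitted here)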

Figure 3.40.

Overview of the pipeline for the NLG component of HumanProof. A Box has a series of moves performed upon it, each producing an instance of Act, an abstract representation of what the move did. A list of all of the Acts from the session is then converted into a list of sentences, which is finally converted to an S-expression-like structure called Run. Compare this with the standard architecture given in Figure 2.30; the main difference is that the macroplanning phase is performed by the choice of moves performed on boxes, as detailed in Section 3.5.

3.6.2. Interlude: Grice's laws of implicature

One resource that has proven useful in creating human-like proofs is the work of Grice on implicature in linguistics [Gri75[Gri75]Grice, Herbert PLogic and conversation (1975)Speech acts]. To review: Grice states that there is an unwritten rule in natural languages that one should only provide as much detail as is needed to convey the desired message. For example, the statement "I can't find my keys" has the implicature "Do you know where my keys are?"; it implies that the keys may have been lost at the current location and not in a different part of town, and so on. If superfluous detail is included, the reader will pick this up and try to use it to infer additional information. Saying "I can't find my keys at the moment", interpreted literally, has the same meaning as "I can't find my keys", but implicitly means that I have only just realised the key loss or that I will be able to find them soon. Grice posits four maxims that should be maintained in order for a sentence or phrase to be useful:

  1. Quantity The contribution should contain no more or less than what is required. Examples: "Since and is prime, ". "Let be a positive real such that ."

  2. Quality Do not say things for which you don't have enough evidence or things that are not true. An example here would be a false proof.

  3. Relation The contributed sentence should be related to the task at hand. Example; putting a true but irrelevant statement in the middle of the proof is bad.

  4. Manner The message should avoid being obscure, ambiguous and long-winded.

Mathematical texts are shielded from the more oblique forms of implicature that may be found in general texts, but Grice's maxims are still important to consider in the construction of human-readable proofs and serve as a useful rule-of-thumb in determining when a generated sentence will be jarring to read.

With respect to the quantity maxim, it is important to remember also that what counts as superfluous detail can depend on the context of the problem and the skill level of the reader. For example, one may write:

Suppose and are open subsets of . Since is continuous, is open

A more introductory text will need to also mention that is a topological space and so is open. Generally, this kind of implicit lemma-chaining can become arbitrarily complex, but it is typically assumed that these implicit applications are entirely familiar to the reader. Mapping ability level to detail is not a model that I will attempt to write explicitly here. One simple way around this is to allow the proof creator to explicitly tag steps in the proof as 'trivial' so that their application is suppressed in the natural language write-up. Determining the correct level of detail may be a problem in which ML models have a role to play.

3.6.3. Microplanning symbolic mathematics

From a linguistic perspective, a remarkable property of mathematical texts is the interlacing of mathematical symbols and natural language. In the vast majority of cases, each symbolic construct has a natural language equivalent (else verbalising that symbol in conversation would be difficult). For example: "" versus " plus ". Sometimes multiple verbalisations are possible: can be " implies " or " whenever ". Sometimes the symbolic form of a statement is not used as frequently: " is prime" versus . In making text flow well, the decision of when to move between symbolic and textual renderings of a mathematical proof is important. The rule-of-thumb that I have arrived at is to render the general flow of the proof's reasoning using text and to render the objects that are being reasoned about using symbols. The idea here is that one should be able to follow the rough course of argument whilst only skimming the symbolic parts of the proof.

3.6.4. Microplanning binders with class predicate collections

In mathematics, it is common that a variable will be introduced in a sentence and then referenced in later sentences. For example, one will often read sentences such as "Let X be a metric space and let 𝑥 and 𝑦 be points in X". This corresponds to the following telescopeA telescope is a list of binders where the type of a binder may depend on variables declared earlier in the list. Telescopes are equivalent to a well-formed context (see Section 2.1.3), but the term telescope is also used to discuss lists of binders that appear in expressions such as lambda and forall bindings. of binders: (X : Type) (_ : metric_space X) (x y : X). These effectively act as 'linguistic variable binders'.

In this subsection I will highlight how to convert lists of binders to natural language phrases of this form. To the best of my knowledge this is an original contribution so I will explain this mechanism in more detail. Related work to the approach here is discussed in Section 3.6.4.1. Table 3.41 presents some examples of this process.

Table 3.41.

Examples of generating natural language renderings of variable introductions from type-theory telescopes. Square brackets on a binder such as [group G] denote a typeclass binder. This typeclass binder is equivalent to the binder (𝔤 : group G) where the binder name 𝔤 is omitted. Typeclasses were introduced for use with the Haskell programming language [HHPW96]. Typeclasses are used extensively in the Lean 3 theorem prover; a description of their implementation can be found in [MAKR15 §2.4].

Telescope    Generated text
(X : Type) [metric_space X] (𝑥 𝑦 : X)    Let X be a metric space and let 𝑥 and 𝑦 be points in X.
(G : Type) [group G] (𝑥 𝑦 : G)    Let G be a group and let 𝑥 and 𝑦 be elements of G.
(G : Type) [group G] (H : set G) (h₁ : subgroup.normal G H)    Let G be a group and H be a normal subgroup of G.
(𝑎 𝑏 : ℤ) (h₁ : coprime 𝑎 𝑏)    Let 𝑎 and 𝑏 be coprime integers.
(𝑓 : X → Y) (h₁ : continuous 𝑓)    Let 𝑓 : X → Y be a continuous function.
(T : Type) [topological_space T] (U : set T) (h₁ : open U)    Let T be a topological space and let U be an open set in T.
(ε : ℝ) (h₁ : ε > 0)    Let ε > 0.

[HHPW96] Hall, Cordelia V; Hammond, Kevin; et al. Type classes in Haskell (1996) ACM Transactions on Programming Languages and Systems (TOPLAS). [MAKR15] de Moura, Leonardo; Avigad, Jeremy; et al. Elaboration in Dependent Type Theory (2015) CoRR.

These variable introduction sentences in Table 3.41 take the role of a variable binder for mathematical discourse; the variable is then implicitly 'in scope' until its last mention in the text. Some variables introduced in this way can remain in scope for an entire book, for example the choice of underlying field k in a book on linear algebra. As Ganesalingam notes [Gan10[Gan10]Ganesalingam, Mohan The language of mathematics (2010) §2.5.2], "If mathematicians were not able to use variables in this way, they would need to write extremely long sentences!"

Let's frame the problem as follows: take as input a telescope of binders (e.g. [(𝑎 : ℤ), (𝑏 : ℤ), (h₁ : coprime 𝑎 𝑏)]) and produce a 'variable introduction text' string as shown in the above table. The problem involves a number of challenges:

  • There is not a 1-1 map between binders and pieces of text: in "Let 𝑎, 𝑏 be coprime", the binder h₁ : coprime 𝑎 𝑏 is not named but instead treated as a property of 𝑎 and 𝑏.

  • The words that are used to describe a variable can depend on which typeclass [HHPW96]See the caption of Table 3.41 for more information on typeclasses. their type belongs to. For instance we write "let 𝑥 and 𝑦 be points" or "let 𝑥 and 𝑦 be elements of G" depending on whether the type of 𝑥 and 𝑦 is an instance of metric_space or group.

  • Compare "𝑥 and 𝑦 are prime" versus "𝑥 and 𝑦 are coprime". The first arises from (𝑥 𝑦 : ℕ) (h₁ : prime 𝑥) (h₂ : prime 𝑦) whereas the second from (𝑥 𝑦 : ℕ) (h₁ : coprime 𝑥 𝑦). Hence we need to model the adjectives "prime" and "coprime" as belonging to distinct categories.

To solve this I introduce a schema of class predicate collections. Each binder in the input telescope is converted to two pieces of data: the subject expression 𝑥 and the class predicate 𝑐𝑝, which is made from one of the following constructors (a Lean sketch of this datatype is given after the list):

  • adjective: "continuous", "prime", "positive"

  • fold_adjective: "coprime", "parallel"

  • symbolic_postfix: "∈ A", "> 0", ": X → Y"

  • class_noun: "number", "group", "points in X", "elements of G", "function", "open set in T"

  • none: a failure case, for example when the binder is a proposition that should be realised as a standalone assumption rather than as a predicate about one of the variables.
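The following is a minimal Lean 3 sketch of this datatype. The constructor names mirror the list above, but the exact definition is illustrative rather than the thesis' actual source code; each constructor simply carries the text used to realise it.

    -- Illustrative sketch of the class predicate constructors listed above.
    inductive class_predicate
    | adjective        : string → class_predicate   -- "continuous", "prime", "positive"
    | fold_adjective   : string → class_predicate   -- "coprime", "parallel"
    | symbolic_postfix : string → class_predicate   -- "∈ A", "> 0", ": X → Y"
    | class_noun       : string → class_predicate   -- "number", "points in X", "function"
    | none             : class_predicate            -- failure case: realise as an assumption instead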

The subject expression and the class predicate for a given binder in the input telescope are assigned by consulting a lookup table which pattern-matches the binder type expressions to determine the subject expression and any additional parameters (for example T in "open set in T"). Each pair ⟨𝑥, 𝑐𝑝⟩ is mapped to ⟨[𝑥], [𝑐𝑝]⟩ : List Expr × List ClassPredicate. Call this a class predicate collection (CPC). The resulting list of CPCs is then reduced by aggregating [DH93[DH93]Dalianis, Hercules; Hovy, Eduard Aggregation in natural language generation (1993) European Workshop on Trends in Natural Language Generation] adjacent pairs of CPCs according to the in