diff --git a/Reference Manual/lisa.pdf b/Reference Manual/lisa.pdf index 7486581dd85b54c97ded342a5634862f51e17253..bac2001485c9e1952e6c034027af0b0533f7934b 100644 Binary files a/Reference Manual/lisa.pdf and b/Reference Manual/lisa.pdf differ diff --git a/Reference Manual/lisa.tex b/Reference Manual/lisa.tex index 2901e2758602ddf2c188c0c8dad7c5c2a8eb7a81..bc41e7b483b12491427751244513a415132ece90 100644 --- a/Reference Manual/lisa.tex +++ b/Reference Manual/lisa.tex @@ -56,6 +56,7 @@ escapeinside={(*@}{@*)} \begin{document} \maketitle \chapter*{Introduction} + This document aims to provide complete documentation of LISA. Tentatively, every chapter and section will explain a part or concept of LISA, explaining both its implementation and its theoretical foundations \cite{DBLP:conf/tacas/GuilloudK22}. \input{part1.tex} diff --git a/Reference Manual/part1.tex b/Reference Manual/part1.tex index 996c8ede1db202dab1a56cb4fda6000b99ebd2c7..dd3677f0b52f19b05a911256bcebac784fbaf396 100644 --- a/Reference Manual/part1.tex +++ b/Reference Manual/part1.tex @@ -6,18 +6,13 @@ \newcommand\LambdaTF{\operatorname{LambdaTermFormula}} \newcommand\LambdaFF{\operatorname{LambdaFormulaFormula}} - - - - - - \part{Reference Manual} \label{part:manual} \chapter{LISA's trusted code: The Kernel} \label{chapt:kernel} LISA's kernel is the starting point of LISA, formalising the foundations of the whole theorem prover. It is the only trusted code base, meaning that if it is bug-free then no further erroneous or malicious code can violate the soundness property and prove invalid statements. Hence, the two main goals of the kernel are to be efficient and trustworthy. + LISA's foundations are based on the most traditional (in the mathematical community) foundational theory of all mathematics: \textbf{First Order Logic}, expressed using \textbf{Sequent Calculus} (augmented with schematic symbols), with the axioms of \textbf{Set Theory}. 
Interestingly, while LISA is built with the goal of using Set Theory, the kernel is actually theory-agnostic and is sound to use with any other set of axioms. Hence, we defer Set Theory to chapter~\ref{chapt:settheory}. @@ -44,10 +39,12 @@ A term is made of a term label and a list of children, whose length must be equa A constant label of arity $0$ is sometimes called just a constant, and a schematic label of arity $0$ a variable. We define the shortcut $$\Var(x) \equiv \operatorname{SchematicTermLabel}(x, 0)$$ \end{defin} + As the definition states, we have two kinds of function symbols: \textit{Constant} ones and \textit{Schematic} ones. A constant label represents a fixed function symbol in some language, for example the addition ``+'' in Peano arithmetic. Schematic symbols, on the other hand, are uninterpreted --- they can represent any possible term and hence can be substituted by any term. Their use will become clearer in the next section, when we introduce the concept of deductions. Moreover, variables, which are schematic terms of arity 0, can be bound in formulas, as we explain below. \footnote{In a very traditional presentation of first order logic, we would only have variables, i.e. schematic terms of arity 0, and schematic terms of higher arity would only appear in second order logic. 
We defer to Part~\ref{part:theory} Section~\ref{sect:theoryfol} the explanation of why our inclusion of schematic function symbols doesn't fundamentally move us out of First Order Logic.} + \begin{defin}[Formulas] The set of Formulas $\mathcal{F}$ is defined similarly: \begin{equation} @@ -64,6 +61,7 @@ Where $\mathcal{L}_{Predicate}$ is the set of \textit{predicate labels}: \mid & \operatorname{SchematicPredicateLabel}(\textnormal{Id}, \textnormal{Arity}) \end{split} \end{equation} + and $\mathcal{L}_{Connector}$ is the set of \textit{connector labels}: \begin{equation} \begin{split} @@ -71,6 +69,7 @@ and $\mathcal{L}_{Connector}$ is the set of \textit{connector labels}: \mid & \operatorname{SchematicConnectorLabel}(\textnormal{Id}, \textnormal{Arity}) \end{split} \end{equation} + A formula can be constructed from a list of terms using a predicate label $${\leq}(x, 7)$$ or from a list of smaller formulas using a connector label @@ -98,7 +97,9 @@ $$ $$ \end{defin} + In this document, as well as in the code documentation, we often write terms and formulas in a more conventional way, generally hiding the arity of labels and representing the label with its identifier only, preceded by an interrogation mark ? if the symbol is schematic. When the arity is relevant, we write it with a superscript, for example: + $$ f^3(x,y,z) \equiv \operatorname{Fun}(f, 3)(\List(\Var(x), \Var(y), \Var(z))) $$ @@ -107,11 +108,14 @@ $$ \forall x. \phi \equiv \operatorname{Binder}(\forall, \Var(x), \phi) $$ We also use other usual representations such as symbols in infix position, omitting parentheses according to usual precedence rules, etc. 
+ Finally, note that we use subscripts to emphasize that a variable is possibly free in a term or formula: + $$ t_{x,y,z}, \phi_{x,y,z} $$ + \paragraph{Convention} Throughout this document, and in the code base, we adopt the following conventions: We use $r$, $s$, $t$, $u$ to denote arbitrary terms, $a$, $b$, $c$ to denote constant term symbols of arity $0$ and $f$, $g$, $h$ to denote term symbols of arity non-$0$. We precede those with an interrogation mark, such as $?f$, to denote schematic symbols. Moreover, we also use $x$, $y$, $z$ to denote variables (schematic terms of arity $0$). For formulas, we use Greek letters such as $\phi$, $\psi$, $\tau$ to denote arbitrary formulas, $\nu$, $\mu$ to denote formula variables. We use capital letters like $P$, $Q$, $R$ to denote predicate symbols, preceding them similarly with an interrogation mark $?$ for schematic predicates. Schematic connectors are rarer, but when they appear, we precede them with two interrogation marks, for example $??c$. Sets or sequences of formulas are denoted with capital Greek letters $\Pi$, $\Sigma$, $\Gamma$, $\Delta$, etc. @@ -133,11 +137,13 @@ $$ with any fresh variable $z$ (which is not free in $r$ and $\phi$) otherwise. \end{defin} + This definition of substitution is justified by the notion of alpha equivalence: two formulas which are identical up to renaming of bound variables are considered equivalent. In practice, this means that the free variables inside $r$ will never get caught when substituted. We can now define \enquote{lambda terms}. \begin{defin}[Lambda Terms] A lambda term is a meta expression (meaning that it is not part of FOL itself) consisting of a term with ``holes'' that can be filled by other terms. This is represented with specified variables as arguments, similar to lambda calculus. 
For example, for a functional term with two arguments, we write + $$ L = \Lambdaa(\Var(x), \Var(y))(t_{x,y}) $$ @@ -150,6 +156,7 @@ $$ $$ They are useful because, just as variables can be substituted by terms, schematic term labels of arity greater than 0 can be substituted by such functional terms. As the definition of such substitution is rather convoluted to describe, we prefer to show examples and redirect the reader to the source code of LISA for a technical definition. \footnote{Note that in lambda calculus, this would simply be iterated beta-reduction.} + \begin{ex}[Functional terms substitution in terms] \begin{center} \begin{tabular}{|c|r c l|c|} @@ -167,6 +174,7 @@ $?f(x, x+y)$ & $?f$ & $\rightarrow$ & $\lambda x.y. \cos(x-y)*y$ & $\cos(x-(x+y) \end{center} \end{ex} + The definition extends to substitution of schematic terms inside formulas, with capture free substitution for bound variables. For example: \begin{ex}[Functional terms substitution in formulas] @@ -186,6 +194,7 @@ $\exists y. ?f(y) \leq ?f(5)$ & $?f$ & $\rightarrow$ & $\lambda x. x+y$ & $\exis \end{ex} Note that if the lambda expression contains free variables (such as $y$ in the last example), then appropriate alpha-renaming of variables may be needed. + We similarly define functional formulas, except that these can take either term arguments or formula arguments. Specifically, we use $\LambdaTT$, $\LambdaTF$, $\LambdaFF$ to indicate functional expressions that take terms or formulas as arguments and return a term or formula. \begin{ex}[Typical functional expressions] @@ -200,12 +209,15 @@ $\LambdaFF(\FormulaVar(\nu), \FormulaVar(\mu))$ & $=$ & $\lambda \nu.\mu. \nu \l \end{center} \end{ex} + Note that in the last case, we use $\FormulaVar$ to represent the arguments of the lambda formula. Substitution of functional formulas is completely analogous to (capture free!) substitution of functional terms. 
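To build intuition for this substitution, the following self-contained Scala sketch reproduces the $?f(x, x+y)$ row of the first example table on a toy term representation. This is our own illustration, not the kernel's implementation: the toy omits binders, and therefore also the alpha-renaming that LISA's kernel performs to avoid variable capture.

```scala
// A toy model of schematic-symbol substitution (our own sketch, not LISA's kernel code).
sealed trait TTerm
case class V(name: String) extends TTerm                       // a variable
case class App(label: String, args: List[TTerm]) extends TTerm // a (possibly schematic) symbol applied to arguments
case class Lam(params: List[String], body: TTerm)              // a functional ("lambda") term

// Plug arguments into the lambda's parameters, simultaneously.
def instantiate(l: Lam, args: List[TTerm]): TTerm = {
  val subst = l.params.zip(args).toMap
  def go(t: TTerm): TTerm = t match {
    case V(x)       => subst.getOrElse(x, t)
    case App(g, as) => App(g, as.map(go))
  }
  go(l.body)
}

// Replace every occurrence of the schematic label `f` by the lambda term `l`,
// beta-reducing on the fly (the "iterated beta-reduction" of the footnote).
def substSchematic(t: TTerm, f: String, l: Lam): TTerm = t match {
  case V(_) => t
  case App(g, args) =>
    val newArgs = args.map(substSchematic(_, f, l))
    if (g == f) instantiate(l, newArgs) else App(g, newArgs)
}

// Table row: ?f(x, x+y) with ?f -> (lambda x, y. cos(x-y)*y)
val t = App("?f", List(V("x"), App("+", List(V("x"), V("y")))))
val l = Lam(List("x", "y"), App("*", List(App("cos", List(App("-", List(V("x"), V("y"))))), V("y"))))
val r = substSchematic(t, "?f", l) // cos(x - (x + y)) * (x + y)
```

The substitution proceeds bottom-up, so the arguments of a schematic application are themselves substituted before the lambda is beta-reduced.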
+ \subsection{The Equivalence Checker} \label{subs:equivalencechecker} + While proving theorems, trivial syntactical transformations such as $p\land q \equiv q\land p$ significantly increase the length of proofs, which is desirable neither for the user nor the machine. Moreover, the proof checker will very often have to check whether two formulas that appear in different sequents are the same. Hence, instead of using pure syntactical equality, LISA implements a powerful equivalence checker able to detect a class of equivalence-preserving logical transformations. As an example, two formulas $p\land q$ and $q\land p$ would be naturally treated as equivalent. For soundness, the relation decided by the algorithm should be contained in the $\Longleftrightarrow$ ``if and only if'' relation of first order logic. It is well known, however, that this relation is in general undecidable, and even the $\Longleftrightarrow$ relation for propositional logic is coNP-complete. So, for practicality, we need a relation that is efficiently computable. @@ -229,6 +241,7 @@ Moreover, the implementation in LISA also takes into account symmetry and reflex L11: & $\exists ! x. P = \exists y. \forall x. (x=y) \leftrightarrow P$ & \\ \end{tabular} \ + \caption{Laws LISA's equivalence checker automatically accounts for. LISA's equivalence-checking algorithm is complete (and log-linear time) with respect to laws L1-L11 and L1'-L8'.} \label{tab:OCBSL} @@ -254,6 +267,7 @@ A sequent $\phi \vdash \psi$ is logically but not conceptually equivalent to a s A deduction rule, also called a proof step, has (in general) between zero and two prerequisite sequents (which we call \textit{premises} of the rule) and one conclusion sequent, and possibly takes some arguments that describe how the deduction rule is applied. The basic deduction rules used in LISA are shown in Figure~\ref{fig:deduct_rules_1}. 
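As a minimal illustration of how two such rules chain (our own toy example, written in the same bussproofs notation as the proof figures in this chapter):

```latex
\AxiomC{}
\RightLabel{\texttt{Hypothesis}}
\UnaryInfC{$\phi \vdash \phi$}
\RightLabel{\texttt{RightImplies}}
\UnaryInfC{$\vdash \phi \to \phi$}
\DisplayProof
```

Here \texttt{Hypothesis}, a rule with zero premises, produces the tautological sequent $\phi \vdash \phi$, and \texttt{RightImplies} discharges the left-hand $\phi$ into an implication on the right.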
+ Since we work on first order logic with equality and accept axioms, there are also rules for equality reasoning, which include reflexivity of equality. Moreover, we include equal-for-equal and equivalent-for-equivalent substitutions in Figure~\ref{fig:deduct_rules_2}. While those substitution rules are deduced steps, and hence could technically be omitted, simulating them can sometimes take a large number of steps, so they are included as base steps for efficiency. There are also some special proof steps used to organise proofs, shown in Figure~\ref{fig:deduct_rules_3}. @@ -411,6 +425,7 @@ There are also some special proof steps used to organise proofs, shown in Figure \end{tabular} \end{center} + \caption{Additional deduction rules for substitution and instantiation.} \label{fig:deduct_rules_2} \end{figure} @@ -445,6 +460,7 @@ There are also some special proof steps used to organise proofs, shown in Figure \end{figure} \newpage \subsection{Proofs} + Proof steps can be composed into a directed acyclic graph. The root of the proof shows the conclusive statement, and the leaves are assumptions or tautologies (instances of the \texttt{Hypothesis} rule). Figure~\ref{fig:exampleProof} shows an example of a proof tree for Peirce's Law in strict Sequent Calculus. \begin{figure} @@ -464,6 +480,7 @@ Proof steps can be composed into a directed acyclic graph. The root of the proof \RightLabel{\texttt { RightImplies}} \UnaryInfC{$ \vdash ((\phi \to \psi) \to \phi) \to \phi$} \DisplayProof + \caption{A proof of Peirce's law in Sequent Calculus. The bottommost sequent (root) is the conclusion.} \label{fig:exampleProof} \end{figure} @@ -471,6 +488,7 @@ Proof steps can be composed into a directed acyclic graph. The root of the proof In the Kernel, proof steps are organised linearly, in a list, to form actual proofs. Each proof step refers to its premises using numbers, which indicate the place of the premise in the proof. 
Moreover, proofs are conditional: they can carry an explicit set of assumed sequents, named ``\lstinline{imports}'', which give some starting points to the proof. Typically, these imports will contain previously proven theorems, definitions, or axioms (more on that in Section~\ref{sect:TheoremsAndTheories}). For a proof step to refer to an imported sequent, one uses negative integers. $-1$ corresponds to the first sequent of the import list of the proof, $-2$ to the second, etc. + Formally, a proof is a pair made of a list of proof steps and a list of sequents: $$ \lstinline{Proof(steps:List[ProofStep], imports:List[Sequent])} $$ @@ -503,6 +521,7 @@ In LISA, a proof object has no guarantee to be correct. It is perfectly possible \item Every proof step must be correctly constructed, with the bottom sequent correctly following from the premises by the type of the proof step and its arguments. \end{enumerate} + Given some proof $p$, the proof checker will verify these points. For most proof steps, this typically involves verifying that the premises and the conclusion match according to a transformation specific to the deduction rule. Note that for most cases where there is an intuitive symmetry in arguments, such as \texttt{RightAnd} or \texttt{LeftSubstIff} for example, permutations of those arguments don't matter. Hence, most of the proof checker's work consists in verifying that some formulas, or subformulas thereof, are identical. This is where the equivalence checker comes into play. By checking equivalence rather than strict syntactic equality, a lot of steps become redundant and can be merged. That way, \texttt{LeftAnd}, \texttt{RightOr}, \texttt{LeftIff} become instances of the \texttt{Weakening} rules, and \texttt{RightIff} an instance of \texttt{RightAnd}. 
@@ -510,15 +529,18 @@ Hence, most of the proof checker's work consists in verifying that some formulas \texttt{LeftNot}, \texttt{RightNot}, \texttt{LeftImplies}, \texttt{RightImplies}, \texttt{LeftRefl}, \texttt{RightRefl}, \texttt{LeftExistsOne}, \texttt{RightExistsOne} can be omitted altogether. This gives an intuition of how useful the equivalence checker is in cutting proof length. It also combines very well with substitution steps. While most proof steps are oblivious to formula transformations allowed by the equivalence checker, they don't allow transformations of the whole sequent: to easily rearrange sequents according to the sequent semantics (\ref{eq:SequentSemantic}), one should use the \texttt{Rewrite} step. + The proof checking function will output a \textit{judgement}: $$\lstinline{SCValidProof(proof: SCProof)}$$ or $$\lstinline{SCInvalidProof(proof: SCProof, path: Seq[Int], message: String)}$$ + \lstinline{SCInvalidProof}{} indicates an erroneous proof. The second argument points to the faulty proof step (through subproofs), and the third argument is an error message hinting at why the step is faulty. \section{Theorems and Theories} \label{sect:TheoremsAndTheories} + In mathematics as a discipline, theorems don't exist in isolation. They depend on an agreed-upon set of axioms, definitions, and previously proven theorems. Formally, theorems are developed within theories. A theory is defined by a language, which contains the symbols allowed in the theory, and by a set of axioms, which are assumed to hold true within it. In LISA, a \lstinline{theory}{} is a mutable object that starts as the pure theory of predicate logic: it has no known symbols and no axioms. Then we can introduce into it elements of Set Theory (symbols $\in$, $\emptyset$, $\bigcup$ and set theory axioms, see Chapter~\ref{chapt:settheory}) or of any other theory. 
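Consuming the judgement introduced above typically means pattern-matching on its two cases. The following self-contained sketch mirrors the signatures quoted in the text with toy stand-ins (in the kernel, \lstinline{proof} is an \lstinline{SCProof} rather than a \lstinline{String}):

```scala
// Toy stand-ins for the kernel's judgement type; `proof` is simplified to a String here.
sealed trait SCJudgement
case class SCValidProof(proof: String) extends SCJudgement
case class SCInvalidProof(proof: String, path: Seq[Int], message: String) extends SCJudgement

// Turn a judgement into a human-readable report, using the path through subproofs.
def report(j: SCJudgement): String = j match {
  case SCValidProof(_)                  => "valid"
  case SCInvalidProof(_, path, message) => s"invalid at step ${path.mkString("/")}: ${message}"
}
```

The \lstinline{path} field lets the reporter drill into nested subproofs before naming the offending step.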
@@ -530,8 +552,10 @@ To conduct a proof inside a \lstinline{Theory}{}, using its axioms, the proof sh \label{subs:definitions} The user can also introduce definitions in the \lstinline{Theory}{}. LISA's kernel allows the user to define two kinds of objects: Function (or Term) symbols and Predicate symbols. It is important to remember that in the context of Set Theory, function symbols are not the usual mathematical functions and predicate symbols are not the usual mathematical relations. Indeed, on one hand a function symbol defines an operation on all possible sets, but on the other hand it is impossible to use the symbol alone, without applying it to arguments, or to quantify over function symbols. + Actual mathematical functions, on the other hand, are proper sets which contain the graph of a function on some domain. Their domain must be restricted to a proper set, and it is possible to quantify over such set-like functions or to use them without applications. These set-like functions are represented by constant symbols. For example, ``$f$ is derivable'' cannot be stated about a function symbol. We will come back to this in Chapter~\ref{chapt:settheory}, but for now let us remember that (non-constant) function symbols are suitable for intersection ($\bigcap$) between sets but not for, say, the Riemann $\zeta$ function. + \begin{figure} A definition in LISA is one of those two kinds of objects: \begin{lstlisting}[frame=single] @@ -575,6 +599,7 @@ Figure \ref{fig:justifications} shows the types of justification in a theory (Th { \def\arraystretch{4} + \begin{figure}[hp] % Justifications: \begin{center} @@ -624,6 +649,7 @@ FunctionDefinition( \\ %\hline \end{tabular} + \caption{The different types of justification in a \lstinline{Theory}{} object.} \label{fig:justifications} \end{center} @@ -784,18 +810,13 @@ This feature is under active development. 
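To make the definition mechanism concrete, here is a sketched instance (our own illustrative example; the set-theoretic facts used are standard ZF and are deferred to Chapter~\ref{chapt:settheory}). The binary intersection symbol $\cap$ can be introduced as a function symbol of arity $2$ with the defining formula

```latex
$$
\phi_{z,x,y} := \forall t.\ (t \in z) \leftrightarrow (t \in x \land t \in y)
$$
```

Existence of such a $z$ follows from the separation schema and its uniqueness from extensionality, so $\exists ! z. \phi_{z,x,y}$ is a theorem, and the definition yields the expected axiom $\forall t.\ (t \in x \cap y) \leftrightarrow (t \in x \land t \in y)$.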
\subsection{Writing theory files} LISA provides a canonical way of writing and organizing Kernel proofs by means of a set of utilities and a DSL made possible by some of Scala 3's features such as string interpolation, extension and implicits. -% this exact sentence appears a section above, commenting: -% This is especially directed to people who want to build understanding and intuition regarding formal proofs in Sequent Calculus. The way to write a new theory file for mathematical development is: \begin{lstlisting}[language=Scala, frame=single] object MyTheoryName extends lisa.Main { } \end{lstlisting} -and that's it! To write a theorem, the recommended syntax is -% if it's recommended is it really for example? -% (for example) -: +and that's it! To write a theorem, the recommended syntax is: \begin{lstlisting}[language=Scala, frame=single] object MyTheoryName extends lisa.Main { @@ -838,7 +859,9 @@ object MyTheoryName extends lisa.Main { } \end{lstlisting} -It is important to note that when multiple such files are developed, they all use the same underlying \lstinline{RunningTheory}{}. This makes it possible to use results proved previously by means of a simple \lstinline{import}{} statement as one would import a regular object. Similarly, one should also import as usual automation and tactics developed alongside. It is expected in the medium term that \lstinline{lisa.Main} will come with basic automation. + +It is important to note that when multiple such files are developed, they all use the same underlying \lstinline{RunningTheory}{}. This makes it possible to use results proved previously by means of a simple \lstinline{import}{} statement, as one would import a regular object. Similarly, automation and tactics developed alongside should be imported as usual. It is expected in the medium term that \lstinline{lisa.Main}{} will come with basic automation. 
To check the result of a developed file, and verify that the proofs contain no error, it is possible to run such a library object. % specify which object @@ -908,6 +931,7 @@ Unordered Pair constant & $(\cdot, \cdot )$ & \lstinline$pair(s,t)$ \\ Power Set function & $\mathcal P$ & \lstinline$powerSet(s)$ \\ Set Union/Flatten function & $\bigcup$ & \lstinline$union(x)$ \\ \end{tabular} + \caption{The basic symbols of ZF.} \label{fig:symbolszf} \end{center} diff --git a/Reference Manual/part2.tex b/Reference Manual/part2.tex index edfde3dafaf4ab1c6294e322385f4da2fd34b578..409b346a7d90bd94b02146341773f8348a6ca0dc 100644 --- a/Reference Manual/part2.tex +++ b/Reference Manual/part2.tex @@ -8,6 +8,7 @@ An extension by definition is the formal way of introducing new symbols in a mathematical theory. Theories can be extended into new ones by adding new symbols and new axioms to them. We're interested in a special kind of extension, called \textit{conservative extension}. \begin{defin}[Conservative Extension] + A theory $\mathcal{T}_2$ is a conservative extension of a theory $\mathcal{T}_1$ if: \begin{itemize} \item $\mathcal{T}_1 \subset \mathcal{T}_2$ @@ -29,6 +30,7 @@ Moreover, in that case we require that $$ \exists ! y. \phi_{y, x_1,...,x_k} $$ + is a theorem of $\mathcal{T}_1$. \end{itemize} \end{itemize} @@ -38,7 +40,9 @@ We also say that a theory $\mathcal{T}_k$ is an extension by definition of a the For function definition, it is common in logic textbooks to only require the existence of $y$ and not its uniqueness. The axiom one would then obtain would only be $\phi[f(x_1,...,x_n)/y]$. This also leads to a conservative extension, but it turns out not to be enough in the presence of axiom schemas (axioms containing schematic symbols). \begin{lemma} + In ZF, an extension by definition without uniqueness doesn't necessarily yield a conservative extension if the use of the new symbol is allowed in axiom schemas. 
+ \end{lemma} \begin{proof} In ZF, consider the formula $\phi_c := \forall x. \exists y. (x \neq \emptyset) \implies y \in x$ expressing that nonempty sets contain an element, which is provable in ZFC. @@ -54,6 +58,7 @@ For the definition with uniqueness, there is a stronger result than only conserv \begin{defin} A theory $\mathcal{T}_2$ is a fully conservative extension over a theory $\mathcal{T}_1$ if: \begin{itemize} + \item it is conservative, and \item for any formula $\phi_2$ with free variables $x_1, ..., x_k$ in the language of $\mathcal{T}_2$, there exists a formula $\phi_1$ in the language of $\mathcal{T}_1$ with free variables among $x_1, ..., x_k$ such that $$\mathcal{T}_2 \vdash \forall x_1...x_k. (\phi_1 \leftrightarrow \phi_2)$$ @@ -64,6 +69,7 @@ An extension by definition with uniqueness is fully conservative. \end{thm} The proof is done by induction on the height of the formula and is not difficult, but it is fairly tedious. \begin{thm} + If an extension $\mathcal{T}_2$ of a theory $\mathcal{T}_1$ with axiom schemas is fully conservative, then for any instance $\alpha$ of the axiom schemas containing a new symbol, $\Gamma \vdash \alpha$, where $\Gamma$ contains no axiom schema instantiated with new symbols. 
\end{thm} diff --git a/build.sbt b/build.sbt index aa429a01bf5e94e0390013c7fa043c8bd31ed36d..822623420a3d33dba583baf82b4466cfbe6a1d38 100644 --- a/build.sbt +++ b/build.sbt @@ -17,7 +17,7 @@ inThisBuild( ) val scala2 = "2.13.8" -val scala3 = "3.1.3" +val scala3 = "3.2.1-RC1-bin-20220619-4cb967f-NIGHTLY" val commonSettings2 = Seq( scalaVersion := scala2 @@ -26,11 +26,14 @@ val commonSettings3 = Seq( scalaVersion := scala3, scalacOptions ++= Seq( "-language:implicitConversions", + // "-source:future", re-enable when liancheng/scalafix-organize-imports#221 is fixed "-old-syntax", "-no-indent" + ), libraryDependencies += "org.scalatest" %% "scalatest" % "3.2.10" % "test", + libraryDependencies += "org.scala-lang.modules" %% "scala-parser-combinators" % "2.1.1", Test / parallelExecution := false ) @@ -50,8 +53,8 @@ lazy val root = Project( .settings( version := "0.1" ) - .dependsOn(kernel, withTests(utils), theories, tptp) // Everything but `examples` - .aggregate(kernel, utils, theories, tptp) // To run tests on all modules + .dependsOn(kernel, withTests(utils), theories, tptp, front) // Everything but `examples` + .aggregate(kernel, utils, theories, tptp, front) // To run tests on all modules lazy val kernel = Project( id = "lisa-kernel", @@ -66,6 +69,7 @@ lazy val utils = Project( id = "lisa-utils", base = file("lisa-utils") ) + .settings(commonSettings3) .dependsOn(kernel) .dependsOn(silex) @@ -88,6 +92,14 @@ lazy val tptp = Project( ) .dependsOn(withTests(utils)) +lazy val front = Project( + id = "lisa-front", + base = file("lisa-front"), +) + .settings(commonSettings3) + .dependsOn(kernel, utils, theories) + + lazy val examples = Project( id = "lisa-examples", base = file("lisa-examples") diff --git a/lisa-examples/src/main/scala/Example.scala b/lisa-examples/src/main/scala/Example.scala index 57fee33e2167989fbe243ee11d5ed264eadb9d01..d494a5222e25a1e35b95a2a19aabf47ff7ff342a 100644 --- a/lisa-examples/src/main/scala/Example.scala +++ 
b/lisa-examples/src/main/scala/Example.scala @@ -1,9 +1,10 @@ +import lisa.Main import lisa.kernel.fol.FOL.* import lisa.kernel.proof.SCProof import lisa.kernel.proof.SCProofChecker import lisa.kernel.proof.SCProofChecker.* import lisa.kernel.proof.SequentCalculus.* -import lisa.proven.tactics.SimplePropositionalSolver.solveSequent +import lisa.automation.kernel.SimplePropositionalSolver.solveSequent import lisa.tptp.KernelParser.* import lisa.tptp.ProblemGatherer.* import lisa.tptp.* @@ -26,7 +27,7 @@ object Example { * The last two lines don't need to be changed. */ def proofExample(): Unit = { - object Ex extends lisa.proven.Main { + object Ex extends Main { THEOREM("fixedPointDoubleApplication") of "" PROOF { steps( ???, @@ -139,7 +140,7 @@ object Example { p.formulas.foreach(printAnnotatedFormula) } - val P = SchematicNPredicateLabel("P", 1) + val P = SchematicPredicateLabel("P", 1) val Q = PredicateFormula(VariableFormulaLabel("Q"), Seq()) val R = PredicateFormula(VariableFormulaLabel("R"), Seq()) diff --git a/lisa-front/src/main/scala/lisa/front/fol/FOL.scala b/lisa-front/src/main/scala/lisa/front/fol/FOL.scala new file mode 100644 index 0000000000000000000000000000000000000000..7119aa09b4be9d2cf0f7a2e454b032dd07b516a7 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/FOL.scala @@ -0,0 +1,32 @@ +package lisa.front.fol + +import lisa.front.fol.conversions.from.* +import lisa.front.fol.conversions.to.* +import lisa.front.fol.definitions.* +import lisa.front.fol.ops.* +import lisa.front.fol.utils.* +import lisa.front.printer.FrontPositionedPrinter + +/** + * The package containing all the definitions and utilities to work with first order logic (FOL). 
+ */ +object FOL + extends FormulaDefinitions + with TermConversionsTo + with FormulaConversionsTo + with TermConversionsFrom + with FormulaConversionsFrom + with TermUtils + with FormulaUtils + with TermOps + with FormulaOps { + + override protected def pretty(term: Term): String = FrontPositionedPrinter.prettyTerm(term) + override protected def pretty(formula: Formula): String = FrontPositionedPrinter.prettyFormula(formula) + + type LabelType = Label + type SchematicLabelType = SchematicLabel + type LabeledTreeType[A <: Label] = LabeledTree[A] + type WithArityType[N <: Arity] = WithArity[N] + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/conversions/FrontKernelMappings.scala b/lisa-front/src/main/scala/lisa/front/fol/conversions/FrontKernelMappings.scala new file mode 100644 index 0000000000000000000000000000000000000000..9c381ed18405abc7aefd8b605aef5c2087798455 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/conversions/FrontKernelMappings.scala @@ -0,0 +1,33 @@ +package lisa.front.fol.conversions + +import lisa.front.fol.definitions.FormulaDefinitions + +trait FrontKernelMappings extends FormulaDefinitions { + + protected val connectorsTo: Map[ConstantConnectorLabel[?], lisa.kernel.fol.FOL.ConnectorLabel] = Map( + neg -> lisa.kernel.fol.FOL.Neg, + implies -> lisa.kernel.fol.FOL.Implies, + iff -> lisa.kernel.fol.FOL.Iff, + and -> lisa.kernel.fol.FOL.And, + or -> lisa.kernel.fol.FOL.Or + ) + protected val bindersTo: Map[BinderLabel, lisa.kernel.fol.FOL.BinderLabel] = Map( + forall -> lisa.kernel.fol.FOL.Forall, + exists -> lisa.kernel.fol.FOL.Exists, + existsOne -> lisa.kernel.fol.FOL.ExistsOne + ) + protected val predicatesTo: Map[ConstantPredicateLabel[?], lisa.kernel.fol.FOL.ConstantPredicateLabel] = Map( + equality -> lisa.kernel.fol.FOL.equality.asInstanceOf[lisa.kernel.fol.FOL.ConstantPredicateLabel] // Sadly... 
+ ) + + private def reverseMap[U, V](map: Map[U, V]): Map[V, U] = { + val newMap = map.map(_.swap) + assert(newMap.size == map.size) + newMap + } + + protected val connectorsFrom: Map[lisa.kernel.fol.FOL.ConnectorLabel, ConstantConnectorLabel[?]] = reverseMap(connectorsTo) + protected val bindersFrom: Map[lisa.kernel.fol.FOL.BinderLabel, BinderLabel] = reverseMap(bindersTo) + protected val predicatesFrom: Map[lisa.kernel.fol.FOL.ConstantPredicateLabel, ConstantPredicateLabel[?]] = reverseMap(predicatesTo) + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/conversions/from/FormulaConversionsFrom.scala b/lisa-front/src/main/scala/lisa/front/fol/conversions/from/FormulaConversionsFrom.scala new file mode 100644 index 0000000000000000000000000000000000000000..f3fe0ebb29eac58a2335e78df05daa6a506d35d2 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/conversions/from/FormulaConversionsFrom.scala @@ -0,0 +1,43 @@ +package lisa.front.fol.conversions.from + +import lisa.front.fol.conversions.FrontKernelMappings +import lisa.front.fol.definitions.FormulaDefinitions + +trait FormulaConversionsFrom extends FormulaDefinitions with TermConversionsFrom with FrontKernelMappings { + + def fromKernel(label: lisa.kernel.fol.FOL.ConstantPredicateLabel): ConstantPredicateLabel[?] = + predicatesFrom.getOrElse(label, ConstantPredicateLabel.unsafe(label.id, label.arity)) + def fromKernel(label: lisa.kernel.fol.FOL.SchematicPredicateLabel): SchematicPredicateLabel[?] = + SchematicPredicateLabel.unsafe(label.id, label.arity) + def fromKernel(label: lisa.kernel.fol.FOL.VariableFormulaLabel): SchematicPredicateLabel[?] = + SchematicPredicateLabel.unsafe(label.id, 0) + + /** + * Lifts a predicate label from the kernel to the front. + * @param label the label in the kernel + * @return the label in the front + */ + def fromKernel(label: lisa.kernel.fol.FOL.PredicateLabel): PredicateLabel[?] 
= label match { + case schem: lisa.kernel.fol.FOL.SchematicPredicateLabel => fromKernel(schem) + case constant: lisa.kernel.fol.FOL.ConstantPredicateLabel => fromKernel(constant) + case variable: lisa.kernel.fol.FOL.VariableFormulaLabel => fromKernel(variable) + } + + /** + * Lifts a connector label from the kernel to the front. + * @param label the label in the kernel + * @return the label in the front + */ + def fromKernel(label: lisa.kernel.fol.FOL.ConnectorLabel): ConstantConnectorLabel[?] = connectorsFrom(label) + + /** + * Lifts a formula from the kernel to the front. + * @param formula the formula in the kernel + * @return the formula in the front + */ + def fromKernel(formula: lisa.kernel.fol.FOL.Formula): Formula = formula match { + case lisa.kernel.fol.FOL.PredicateFormula(label, args) => PredicateFormula.unsafe(fromKernel(label), args.map(fromKernel)) + case lisa.kernel.fol.FOL.ConnectorFormula(label, args) => ConnectorFormula.unsafe(fromKernel(label), args.map(fromKernel)) + case lisa.kernel.fol.FOL.BinderFormula(label, bound, inner) => BinderFormula(bindersFrom(label), VariableLabel(bound.id), fromKernel(inner)) + } +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/conversions/from/TermConversionsFrom.scala b/lisa-front/src/main/scala/lisa/front/fol/conversions/from/TermConversionsFrom.scala new file mode 100644 index 0000000000000000000000000000000000000000..c7d85d17d204ee81348fc95a972fe68ac6ef67d6 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/conversions/from/TermConversionsFrom.scala @@ -0,0 +1,31 @@ +package lisa.front.fol.conversions.from + +import lisa.front.fol.definitions.TermDefinitions + +trait TermConversionsFrom extends TermDefinitions { + + def fromKernel(label: lisa.kernel.fol.FOL.ConstantFunctionLabel): ConstantFunctionLabel[?] = + ConstantFunctionLabel.unsafe(label.id, label.arity) + def fromKernel(label: lisa.kernel.fol.FOL.SchematicTermLabel): SchematicTermLabel[?] 
= + SchematicTermLabel.unsafe(label.id, label.arity) + + /** + * Lifts a function label from the kernel to the front. + * @param label the label in the kernel + * @return the label in the front + */ + def fromKernel(label: lisa.kernel.fol.FOL.TermLabel): TermLabel[?] = label match { + case constant: lisa.kernel.fol.FOL.ConstantFunctionLabel => fromKernel(constant) + case schematic: lisa.kernel.fol.FOL.SchematicTermLabel => fromKernel(schematic) + } + + /** + * Lifts a term from the kernel to the front. + * @param term the term in the kernel + * @return the term in the front + */ + def fromKernel(term: lisa.kernel.fol.FOL.Term): Term = term match { + case lisa.kernel.fol.FOL.VariableTerm(label) => VariableTerm(VariableLabel(label.id)) + case lisa.kernel.fol.FOL.Term(label, args) => Term.unsafe(fromKernel(label), args.map(fromKernel)) + } +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/conversions/to/FormulaConversionsTo.scala b/lisa-front/src/main/scala/lisa/front/fol/conversions/to/FormulaConversionsTo.scala new file mode 100644 index 0000000000000000000000000000000000000000..3dfdbc805a6f39337b163bd41c39999242533de0 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/conversions/to/FormulaConversionsTo.scala @@ -0,0 +1,79 @@ +package lisa.front.fol.conversions.to + +import lisa.front.fol.conversions.FrontKernelMappings +import lisa.front.fol.definitions.FormulaDefinitions + +trait FormulaConversionsTo extends FormulaDefinitions with TermConversionsTo with FrontKernelMappings { + + def toKernel(label: ConstantConnectorLabel[?]): lisa.kernel.fol.FOL.ConnectorLabel = connectorsTo(label) + + def toKernel(label: ConnectorLabel[?]): lisa.kernel.fol.FOL.ConnectorLabel = label match { + case constant: ConstantConnectorLabel[?] => toKernel(constant) + case _: SchematicConnectorLabel[?] 
=> throw new UnsupportedOperationException + } + + def toKernel(label: ConstantPredicateLabel[?]): lisa.kernel.fol.FOL.ConstantPredicateLabel = + predicatesTo.getOrElse(label, lisa.kernel.fol.FOL.ConstantPredicateLabel(label.id, label.arity)) + + def toKernel(label: SchematicPredicateLabel[0]): lisa.kernel.fol.FOL.VariableFormulaLabel = { + lisa.kernel.fol.FOL.VariableFormulaLabel(label.id) + } + + def toKernel(label: SchematicPredicateLabel[?]): lisa.kernel.fol.FOL.SchematicVarOrPredLabel = { + if (label.arity == 0) lisa.kernel.fol.FOL.VariableFormulaLabel(label.id) + else lisa.kernel.fol.FOL.SchematicPredicateLabel(label.id, label.arity) + } + + def toKernel(label: PredicateLabel[?]): lisa.kernel.fol.FOL.PredicateLabel = label match { + case constant: ConstantPredicateLabel[?] => toKernel(constant) + case schematic: SchematicPredicateLabel[?] => toKernel(schematic) + } + + def toKernel(label: BinderLabel): lisa.kernel.fol.FOL.BinderLabel = bindersTo(label) + + /** + * Translates a label from the front to the kernel. + * @param label the label in the front + * @return the label in the kernel + */ + def toKernel(label: FormulaLabel): lisa.kernel.fol.FOL.FormulaLabel = label match { + case predicate: PredicateLabel[?] => toKernel(predicate) + case connector: ConnectorLabel[?] => toKernel(connector) + case binder: BinderLabel => toKernel(binder) + } + + def toKernel(formula: PredicateFormula): lisa.kernel.fol.FOL.PredicateFormula = + lisa.kernel.fol.FOL.PredicateFormula(toKernel(formula.label), formula.args.map(toKernel)) + + def toKernel(formula: ConnectorFormula): lisa.kernel.fol.FOL.ConnectorFormula = + lisa.kernel.fol.FOL.ConnectorFormula(toKernel(formula.label), formula.args.map(toKernel)) + + def toKernel(formula: BinderFormula): lisa.kernel.fol.FOL.BinderFormula = + lisa.kernel.fol.FOL.BinderFormula(toKernel(formula.label), toKernel(formula.bound), toKernel(formula.inner)) + + /** + * Translates a formula from the front to the kernel. 
+ * @param formula the formula in the front + * @return the formula in the kernel + */ + def toKernel(formula: Formula): lisa.kernel.fol.FOL.Formula = formula match { + case predicate: PredicateFormula => toKernel(predicate) + case connector: ConnectorFormula => toKernel(connector) + case binder: BinderFormula => toKernel(binder) + } + + given Conversion[PredicateFormula, lisa.kernel.fol.FOL.PredicateFormula] = toKernel + given Conversion[ConnectorFormula, lisa.kernel.fol.FOL.ConnectorFormula] = toKernel + given Conversion[BinderFormula, lisa.kernel.fol.FOL.BinderFormula] = toKernel + given Conversion[Formula, lisa.kernel.fol.FOL.Formula] = toKernel + given Conversion[ConstantPredicateLabel[?], lisa.kernel.fol.FOL.ConstantPredicateLabel] = toKernel + given Conversion[SchematicPredicateLabel[0], lisa.kernel.fol.FOL.VariableFormulaLabel] = toKernel + given Conversion[SchematicPredicateLabel[?], lisa.kernel.fol.FOL.SchematicFormulaLabel] = toKernel + given Conversion[PredicateLabel[?], lisa.kernel.fol.FOL.PredicateLabel] = toKernel + given Conversion[ConnectorLabel[?], lisa.kernel.fol.FOL.ConnectorLabel] = toKernel + given Conversion[BinderLabel, lisa.kernel.fol.FOL.BinderLabel] = toKernel + given Conversion[FormulaLabel, lisa.kernel.fol.FOL.FormulaLabel] = toKernel + + given Conversion[(Formula, Formula), (lisa.kernel.fol.FOL.Formula, lisa.kernel.fol.FOL.Formula)] = (a, b) => (a, b) + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/conversions/to/TermConversionsTo.scala b/lisa-front/src/main/scala/lisa/front/fol/conversions/to/TermConversionsTo.scala new file mode 100644 index 0000000000000000000000000000000000000000..86f7a9e47039f92f8aee9e7a7431e8f17dfdf2f4 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/conversions/to/TermConversionsTo.scala @@ -0,0 +1,45 @@ +package lisa.front.fol.conversions.to + +import lisa.front.fol.definitions.TermDefinitions + +trait TermConversionsTo extends TermDefinitions { + + def toKernel(label: 
ConstantFunctionLabel[?]): lisa.kernel.fol.FOL.ConstantFunctionLabel = + lisa.kernel.fol.FOL.ConstantFunctionLabel(label.id, label.arity) + + def toKernel(label: SchematicTermLabel[?]): lisa.kernel.fol.FOL.SchematicTermLabel = { + if (label.arity == 0) lisa.kernel.fol.FOL.VariableLabel(label.id) + else lisa.kernel.fol.FOL.SchematicFunctionLabel(label.id, label.arity) + } + + def toKernel(label: SchematicTermLabel[0]): lisa.kernel.fol.FOL.VariableLabel = { + lisa.kernel.fol.FOL.VariableLabel(label.id) + } + + /** + * Translates a label from the front to the kernel. + * @param label the label in the front + * @return the label in the kernel + */ + def toKernel(label: TermLabel[?]): lisa.kernel.fol.FOL.TermLabel = label match { + case label: ConstantFunctionLabel[?] => toKernel(label) + case label: SchematicTermLabel[?] => toKernel(label) + } + + /** + * Translates a term from the front to the kernel. + * @param term the term in the front + * @return the term in the kernel + */ + def toKernel(term: Term): lisa.kernel.fol.FOL.Term = + lisa.kernel.fol.FOL.Term(toKernel(term.label), term.args.map(toKernel)) + + given Conversion[VariableLabel, lisa.kernel.fol.FOL.VariableLabel] = toKernel + given Conversion[ConstantFunctionLabel[?], lisa.kernel.fol.FOL.ConstantFunctionLabel] = toKernel + given Conversion[SchematicTermLabel[?], lisa.kernel.fol.FOL.SchematicTermLabel] = toKernel + given Conversion[TermLabel[?], lisa.kernel.fol.FOL.TermLabel] = toKernel + given Conversion[Term, lisa.kernel.fol.FOL.Term] = toKernel + + given Conversion[(Term, Term), (lisa.kernel.fol.FOL.Term, lisa.kernel.fol.FOL.Term)] = (a, b) => (a, b) + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/definitions/CommonDefinitions.scala b/lisa-front/src/main/scala/lisa/front/fol/definitions/CommonDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..99ad6340cb43ff41e25bc65e84cece2f995fe140 --- /dev/null +++ 
b/lisa-front/src/main/scala/lisa/front/fol/definitions/CommonDefinitions.scala @@ -0,0 +1,41 @@ +package lisa.front.fol.definitions + +trait CommonDefinitions { + + /** + * The label of a node. + */ + private[fol] trait Label { + val id: String + } + + /** + * A label node that is considered schematic (namely one that can be instantiated). + */ + private[fol] trait SchematicLabel extends Label + + /** + * A labeled tree. + * @tparam A the label of that tree + */ + private[fol] trait LabeledTree[A <: Label] { + val label: A + } + + /** + * Statically typed arity. + */ + type Arity = Int & Singleton + + /** + * A node with arity. + * @tparam N the arity as a static type, or `?` if unknown + */ + private[fol] trait WithArity[N <: Arity] { + val arity: N + } + + private[fol] def isLegalApplication(withArity: WithArity[?], args: Seq[?]): Boolean = + withArity.arity == -1 || withArity.arity == args.size + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/definitions/FormulaDefinitions.scala b/lisa-front/src/main/scala/lisa/front/fol/definitions/FormulaDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..5a0d72774fb1a94205a9e42fff8b236b8983dfc3 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/definitions/FormulaDefinitions.scala @@ -0,0 +1,39 @@ +package lisa.front.fol.definitions + +trait FormulaDefinitions extends FormulaLabelDefinitions with TermDefinitions { + + protected def pretty(formula: Formula): String + + /** + * @see [[lisa.kernel.fol.FOL.Formula]] + */ + sealed abstract class Formula extends LabeledTree[FormulaLabel] { + override def toString: String = pretty(this) + } + + /** + * @see [[lisa.kernel.fol.FOL.PredicateFormula]] + */ + final case class PredicateFormula protected (label: PredicateLabel[?], args: Seq[Term]) extends Formula { + require(isLegalApplication(label, args)) + } + object PredicateFormula { + def unsafe(label: PredicateLabel[?], args: Seq[Term]): PredicateFormula = 
PredicateFormula(label, args) + } + + /** + * @see [[lisa.kernel.fol.FOL.ConnectorFormula]] + */ + final case class ConnectorFormula protected (label: ConnectorLabel[?], args: Seq[Formula]) extends Formula { + require(isLegalApplication(label, args)) + } + object ConnectorFormula { + def unsafe(label: ConnectorLabel[?], args: Seq[Formula]): ConnectorFormula = ConnectorFormula(label, args) + } + + /** + * @see [[lisa.kernel.fol.FOL.BinderFormula]] + */ + final case class BinderFormula(label: BinderLabel, bound: VariableLabel, inner: Formula) extends Formula + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/definitions/FormulaLabelDefinitions.scala b/lisa-front/src/main/scala/lisa/front/fol/definitions/FormulaLabelDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..3ef97f90db1ffac6f166e565a19ec622c78e1dbb --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/definitions/FormulaLabelDefinitions.scala @@ -0,0 +1,112 @@ +package lisa.front.fol.definitions + +trait FormulaLabelDefinitions extends CommonDefinitions { + + /** + * @see [[lisa.kernel.fol.FOL.FormulaLabel]] + */ + sealed abstract class FormulaLabel extends Label + + /** + * @see [[lisa.kernel.fol.FOL.PredicateLabel]] + */ + sealed abstract class PredicateLabel[N <: Arity] extends FormulaLabel with WithArity[N] + + /** + * @see [[lisa.kernel.fol.FOL.ConstantPredicateLabel]] + */ + final case class ConstantPredicateLabel[N <: Arity] protected (id: String, arity: N) extends PredicateLabel[N] + + /** + * @see [[lisa.kernel.fol.FOL.SchematicFormulaLabel]] + */ + final case class SchematicPredicateLabel[N <: Arity] protected (id: String, arity: N) extends PredicateLabel[N] with SchematicLabel + + object ConstantPredicateLabel { + def apply[N <: Arity](id: String)(using v: ValueOf[N]): ConstantPredicateLabel[N] = ConstantPredicateLabel(id, v.value) + def unsafe(id: String, arity: Int): ConstantPredicateLabel[?] 
= ConstantPredicateLabel(id, arity) + } + object SchematicPredicateLabel { + def apply[N <: Arity](id: String)(using v: ValueOf[N]): SchematicPredicateLabel[N] = SchematicPredicateLabel(id, v.value) + def unsafe(id: String, arity: Int): SchematicPredicateLabel[?] = SchematicPredicateLabel(id, arity) + } + + /** + * @see [[lisa.kernel.fol.FOL.equality]] + */ + val equality: ConstantPredicateLabel[2] = ConstantPredicateLabel("=") + + /** + * For completeness, the front provides constant & schematic connectors. + * The kernel only supports the constant ones. The compromise is to use schematic connectors exclusively in the front and to avoid + * translating them into the kernel. + * @see [[PredicateLabel]] + * @see [[TermLabelDefinitions.TermLabel]] + */ + sealed abstract class ConnectorLabel[N <: Arity] extends FormulaLabel with WithArity[N] + + /** + * @see [[lisa.kernel.fol.FOL.ConnectorLabel]] + */ + final case class ConstantConnectorLabel[N <: Arity] protected (id: String, arity: N) extends ConnectorLabel[N] + + /** + * A schematic connector label, exclusive to the front. + * @see [[ConnectorLabel]] + */ + final case class SchematicConnectorLabel[N <: Arity] protected (id: String, arity: N) extends ConnectorLabel[N] with SchematicLabel + + object ConstantConnectorLabel { + private[FormulaLabelDefinitions] def apply[N <: Arity](id: String)(using v: ValueOf[N]): ConstantConnectorLabel[N] = ConstantConnectorLabel(id, v.value) + } + object SchematicConnectorLabel { + def apply[N <: Arity](id: String)(using v: ValueOf[N]): SchematicConnectorLabel[N] = SchematicConnectorLabel(id, v.value) + def unsafe(id: String, arity: Int): SchematicConnectorLabel[?]
= SchematicConnectorLabel(id, arity) + } + + /** + * @see [[lisa.kernel.fol.FOL.Neg]] + */ + val neg: ConstantConnectorLabel[1] = ConstantConnectorLabel("¬") + + /** + * @see [[lisa.kernel.fol.FOL.Implies]] + */ + val implies: ConstantConnectorLabel[2] = ConstantConnectorLabel("⇒") + + /** + * @see [[lisa.kernel.fol.FOL.Iff]] + */ + val iff: ConstantConnectorLabel[2] = ConstantConnectorLabel("↔") + + /** + * @see [[lisa.kernel.fol.FOL.And]] + */ + val and: ConstantConnectorLabel[2] = ConstantConnectorLabel("∧") + + /** + * @see [[lisa.kernel.fol.FOL.Or]] + */ + val or: ConstantConnectorLabel[2] = ConstantConnectorLabel("∨") + + /** + * @see [[lisa.kernel.fol.FOL.BinderLabel]] + */ + final case class BinderLabel private[FormulaLabelDefinitions] (id: String) extends FormulaLabel + + /** + * @see [[lisa.kernel.fol.FOL.Forall]] + */ + val forall: BinderLabel = BinderLabel("∀") + + /** + * @see [[lisa.kernel.fol.FOL.Exists]] + */ + val exists: BinderLabel = BinderLabel("∃") + + /** + * @see [[lisa.kernel.fol.FOL.ExistsOne]] + */ + val existsOne: BinderLabel = BinderLabel("∃!") + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/definitions/TermDefinitions.scala b/lisa-front/src/main/scala/lisa/front/fol/definitions/TermDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..f8e94bfeb35150aafd9c664267b2d47cff864967 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/definitions/TermDefinitions.scala @@ -0,0 +1,29 @@ +package lisa.front.fol.definitions + +trait TermDefinitions extends TermLabelDefinitions { + + protected def pretty(term: Term): String + + /** + * @see [[lisa.kernel.fol.FOL.Term]] + */ + final case class Term protected (label: TermLabel[?], args: Seq[Term]) extends LabeledTree[TermLabel[?]] { + require(isLegalApplication(label, args)) + override def toString: String = pretty(this) + } + object Term { + def unsafe(label: TermLabel[?], args: Seq[Term]): Term = Term(label, args) + } + + /** + * @see 
[[lisa.kernel.fol.FOL.VariableTerm]] + */ + object VariableTerm extends (VariableLabel => Term) { + def apply(label: VariableLabel): Term = Term.unsafe(label, Seq()) + def unapply(t: Term): Option[VariableLabel] = t.label match { + case l: SchematicTermLabel[?] if l.arity == 0 => Some(l.asInstanceOf[VariableLabel]) + case _ => None + } + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/definitions/TermLabelDefinitions.scala b/lisa-front/src/main/scala/lisa/front/fol/definitions/TermLabelDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..5595dc666584d3ad7f3a638f27eb89dbc48eccc9 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/definitions/TermLabelDefinitions.scala @@ -0,0 +1,36 @@ +package lisa.front.fol.definitions + +trait TermLabelDefinitions extends CommonDefinitions { + + /** + * @see [[lisa.kernel.fol.FOL.TermLabel]] + */ + sealed abstract class TermLabel[N <: Arity] extends Label with WithArity[N] + + /** + * @see [[lisa.kernel.fol.FOL.ConstantLabel]] + */ + final case class ConstantFunctionLabel[N <: Arity] protected (id: String, arity: N) extends TermLabel[N] + + /** + * @see [[lisa.kernel.fol.FOL.SchematicTermLabel]] + */ + final case class SchematicTermLabel[N <: Arity] protected (id: String, arity: N) extends TermLabel[N] with SchematicLabel + + type VariableLabel = SchematicTermLabel[0] + object VariableLabel { + def unapply(l: VariableLabel): Option[String] = Some(l.id) + def apply(l: String): VariableLabel = SchematicTermLabel[0](l) + } + + object ConstantFunctionLabel { + def apply[N <: Arity](id: String)(using v: ValueOf[N]): ConstantFunctionLabel[N] = ConstantFunctionLabel(id, v.value) + def unsafe(id: String, arity: Int): ConstantFunctionLabel[?] = ConstantFunctionLabel(id, arity) + } + + object SchematicTermLabel { + def apply[N <: Arity](id: String)(using v: ValueOf[N]): SchematicTermLabel[N] = SchematicTermLabel(id, v.value) + def unsafe(id: String, arity: Int): SchematicTermLabel[?] 
= SchematicTermLabel(id, arity) + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/ops/CommonOps.scala b/lisa-front/src/main/scala/lisa/front/fol/ops/CommonOps.scala new file mode 100644 index 0000000000000000000000000000000000000000..e1a0cccf2ca80a9179a10dc5288df410105c2557 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/ops/CommonOps.scala @@ -0,0 +1,67 @@ +package lisa.front.fol.ops + +import lisa.front.fol.definitions.CommonDefinitions + +import scala.compiletime.ops.int.- + +trait CommonOps extends CommonDefinitions { + + /** + * Creates a tuple type with <code>N</code> types <code>T</code>. + * @tparam T the type of that tuple's members + * @tparam N the arity of that tuple + */ + protected type FillTuple[T, N <: Arity] <: Tuple & Matchable = N match { + case 0 => EmptyTuple + case _ => T *: FillTuple[T, N - 1] + } + + /** + * Similar to [[FillTuple]], except that for convenience when used as function arguments, the tuple of arity one + * is replaced by its own element. 
+ * @tparam T the type of that tuple's members + * @tparam N the arity of that tuple + */ + type FillArgs[T <: Matchable, N <: Arity] <: (T | Tuple) & Matchable = N match { + case 1 => T + case _ => FillTuple[T, N] + } + + // given liftArgsConversion1[U, V]: Conversion[V, FillArgs[U, 0] => V] = v => _ => v + // given liftArgsConversion2[U, V]: Conversion[() => V, FillArgs[U, 0] => V] = v => _ => v() + + private[front] def fillTupleParameters[N <: Arity, T <: Matchable, U](name: String => T, n: N, f: FillArgs[T, N] => U, taken: Set[String] = Set.empty): (FillArgs[T, N], U) = { + val newIds = LazyList.from(0).map(i => s"x$i").filter(!taken.contains(_)).take(n).toIndexedSeq + val parameters = fillTuple[T, N](n, i => name(newIds(i))) + (parameters, f(parameters)) + } + + protected def tuple2seq[T <: Matchable, N <: Arity](any: FillArgs[T, N]): Seq[T] = + any match { + case tuple: Tuple => tuple.productIterator.toSeq.asInstanceOf[Seq[T]] + case _ => Seq(any.asInstanceOf[T]) // Safe cast + } + + /** + * Fills a tuple with <code>n</code> elements of type <code>T</code>. + * @param n the arity of the tuple + * @param f the function generating the elements + * @tparam T the type of the elements + * @tparam N the arity type + * @return the generated tuple + */ + def fillTuple[T <: Matchable, N <: Arity](n: N, f: Int => T): FillArgs[T, N] = + if (n == 1) + f(0).asInstanceOf[FillArgs[T, N]] + else + (0 until n).foldRight(EmptyTuple: Tuple)((i, acc) => f(i) *: acc).asInstanceOf[FillArgs[T, N]] + + extension [T <: Matchable, N <: Arity](tuple: FillArgs[T, N]) { + + /** + * Converts a tuple into a sequence of values. Loses all typing information, but simplifies the usage.
+ */ + def toSeq: Seq[T] = tuple2seq(tuple) + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/ops/FormulaOps.scala b/lisa-front/src/main/scala/lisa/front/fol/ops/FormulaOps.scala new file mode 100644 index 0000000000000000000000000000000000000000..455582c0aa9d1a97a6ad912ea49e9e7b0e32c58d --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/ops/FormulaOps.scala @@ -0,0 +1,76 @@ +package lisa.front.fol.ops + +import lisa.front.fol.definitions.FormulaDefinitions + +trait FormulaOps extends FormulaDefinitions with CommonOps { + + // lampepfl/dotty#14907 + + // extension[N <: Arity] (label: PredicateLabel[N]) + // def apply(args: FillArgs[Term, N]): PredicateFormula = PredicateFormula.unsafe(label, tuple2seq(args)) + extension (label: PredicateLabel[2]) { def apply(a: Term, b: Term): PredicateFormula = PredicateFormula.unsafe(label, Seq(a, b)) } + extension (label: PredicateLabel[1]) { def apply(a: Term): PredicateFormula = PredicateFormula.unsafe(label, Seq(a)) } + extension (label: PredicateLabel[0]) { def apply(): PredicateFormula = PredicateFormula.unsafe(label, Seq.empty) } + + // extension[N <: Arity] (label: ConnectorLabel[N]) + // def apply(args: FillArgs[Formula, N]): ConnectorFormula = ConnectorFormula.unsafe(label, tuple2seq(args)) + extension (label: ConnectorLabel[2]) { def apply(a: Formula, b: Formula): ConnectorFormula = ConnectorFormula.unsafe(label, Seq(a, b)) } + extension (label: ConnectorLabel[1]) { def apply(a: Formula): ConnectorFormula = ConnectorFormula.unsafe(label, Seq(a)) } + extension (label: ConnectorLabel[0]) { def apply(): ConnectorFormula = ConnectorFormula.unsafe(label, Seq.empty) } + + extension [N <: Arity](label: BinderLabel) { def apply(bound: VariableLabel, inner: Formula): BinderFormula = BinderFormula(label, bound, inner) } + + given Conversion[ConstantPredicateLabel[0], PredicateFormula] = PredicateFormula.unsafe(_, Seq.empty) + given Conversion[SchematicPredicateLabel[0], PredicateFormula] = 
PredicateFormula.unsafe(_, Seq.empty) + given Conversion[PredicateLabel[0], PredicateFormula] = PredicateFormula.unsafe(_, Seq.empty) + + given Conversion[ConstantConnectorLabel[0], ConnectorFormula] = ConnectorFormula.unsafe(_, Seq.empty) + given Conversion[SchematicConnectorLabel[0], ConnectorFormula] = ConnectorFormula.unsafe(_, Seq.empty) + given Conversion[ConnectorLabel[0], ConnectorFormula] = ConnectorFormula.unsafe(_, Seq.empty) + + @deprecated + given Conversion[Formula, FormulaLabel] = _.label + + extension (f: Formula) { + def unary_! : ConnectorFormula = ConnectorFormula.unsafe(neg, Seq(f)) + infix def ==>(g: Formula): ConnectorFormula = ConnectorFormula.unsafe(implies, Seq(f, g)) + infix def <=>(g: Formula): ConnectorFormula = ConnectorFormula.unsafe(iff, Seq(f, g)) + infix def /\(g: Formula): ConnectorFormula = ConnectorFormula.unsafe(and, Seq(f, g)) + infix def \/(g: Formula): ConnectorFormula = ConnectorFormula.unsafe(or, Seq(f, g)) + } + + extension (t: Term) { + infix def ===(u: Term): PredicateFormula = PredicateFormula.unsafe(equality, Seq(t, u)) + } + + // Extractors + + object ! 
{ + def unapply(f: Formula): Option[Formula] = f match { + case ConnectorFormula(`neg`, Seq(g)) => Some(g) + case _ => None + } + } + + sealed abstract class UnapplyBinaryConnector(label: ConnectorLabel[2]) { + def unapply(f: Formula): Option[(Formula, Formula)] = f match { + case ConnectorFormula(`label`, Seq(a, b)) => Some((a, b)) + case _ => None + } + } + + object ==> extends UnapplyBinaryConnector(implies) + object <=> extends UnapplyBinaryConnector(iff) + object /\ extends UnapplyBinaryConnector(and) + object \/ extends UnapplyBinaryConnector(or) + + sealed abstract class UnapplyBinaryPredicate(label: PredicateLabel[2]) { + def unapply(f: Formula): Option[(Term, Term)] = f match { + case PredicateFormula(`label`, Seq(a, b)) => Some((a, b)) + case _ => None + } + } + + object === extends UnapplyBinaryPredicate(equality) + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/ops/TermOps.scala b/lisa-front/src/main/scala/lisa/front/fol/ops/TermOps.scala new file mode 100644 index 0000000000000000000000000000000000000000..3361d8d84aa3af140a6c0c54306bf585b506d044 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/ops/TermOps.scala @@ -0,0 +1,23 @@ +package lisa.front.fol.ops + +import lisa.front.fol.definitions.TermDefinitions + +trait TermOps extends TermDefinitions with CommonOps { + + extension [N <: Arity](label: TermLabel[N]) { + def apply(args: FillArgs[Term, N]): Term = Term.unsafe(label, tuple2seq(args)) + } + // extension (label: TermLabel[2]) + // def apply(a: Term, b: Term): Term = Term.unsafe(label, Seq(a, b)) + // extension (label: TermLabel[1]) + // def apply(a: Term): Term = Term.unsafe(label, Seq(a)) + extension (label: TermLabel[0]) { def apply(): Term = Term.unsafe(label, Seq.empty) } + + given Conversion[ConstantFunctionLabel[0], Term] = Term.unsafe(_, Seq.empty) + given Conversion[SchematicTermLabel[0], Term] = Term.unsafe(_, Seq.empty) + given Conversion[TermLabel[0], Term] = Term.unsafe(_, Seq.empty) + + @deprecated + given 
Conversion[Term, TermLabel[?]] = _.label + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/utils/CommonUtils.scala b/lisa-front/src/main/scala/lisa/front/fol/utils/CommonUtils.scala new file mode 100644 index 0000000000000000000000000000000000000000..37bcbbc2f061fe0cf06b8ec181cb873d5d364a99 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/utils/CommonUtils.scala @@ -0,0 +1,93 @@ +package lisa.front.fol.utils + +import lisa.front.fol.definitions.CommonDefinitions +import lisa.front.fol.ops.CommonOps + +import scala.annotation.targetName + +trait CommonUtils extends CommonDefinitions with CommonOps { + + /** + * Creates a fresh id with respect to a set of taken ids. A base id can optionally be specified. + * @param taken the set of taken ids + * @param base an optional base id + * @return a fresh id + */ + def freshId(taken: Set[String], base: String = "x"): String = { + def findFirst(i: Int): String = { + val id = s"${base}_$i" + if (taken.contains(id)) findFirst(i + 1) else id + } + findFirst(0) + } + + /** + * Creates a sequence of fresh ids with respect to a set of taken ids. A base id can optionally be specified. + * @param taken the set of taken ids + * @param base an optional base id + * @return a sequence of fresh ids + */ + def freshIds(taken: Set[String], n: Int, base: String = "x"): Seq[String] = { + require(n >= 0) + def findMany(i: Int, n: Int, taken: Set[String], acc: Seq[String]): Seq[String] = { + if (n > 0) { + val id = s"${base}_$i" + if (taken.contains(id)) findMany(i + 1, n, taken, acc) else findMany(i + 1, n - 1, taken + id, id +: acc) + } else { + acc + } + } + findMany(0, n, taken, Seq.empty).reverse + } + + /** + * Represents the renaming of a label.
+ * @param from the label that should be renamed + * @param to the label it should be renamed to + */ + case class RenamedLabel[L <: Label & WithArity[?], A <: L & SchematicLabel, B <: L] private (from: A, to: B) + object RenamedLabel { + @targetName("applySafe") + def apply[N <: Arity, L <: Label & WithArity[N], A <: L & SchematicLabel, B <: L](from: A, to: B): RenamedLabel[L, A, B] = new RenamedLabel(from, to) + def unsafe[L <: Label & WithArity[?], A <: L & SchematicLabel, B <: L](from: A, to: B): RenamedLabel[L, A, B] = new RenamedLabel(from, to) + } + extension [L <: Label & WithArity[?], A <: L & SchematicLabel](renamed: RenamedLabel[L, A, A]) { + def swap: RenamedLabel[L, A, A] = RenamedLabel.unsafe(renamed.to, renamed.from) + } + + /** + * A lambda definition, namely an anonymous function taking some arguments and returning a result. + * Arguments are represented as holes, thus the body of the function is known at runtime. + */ + protected abstract class LambdaDefinition[N <: Arity, S <: SchematicLabel & WithArity[?], T <: LabeledTree[?]] extends WithArity[N] { + type U <: LabeledTree[? >: S] + + val parameters: Seq[S] + val body: T + + def apply(args: FillArgs[U, N]): T = unsafe(args.toSeq) + def unsafe(args: Seq[U]): T = { + require(args.size == arity) + instantiate(args) + } + protected def instantiate(args: Seq[U]): T + + override val arity: N = parameters.size.asInstanceOf[N] + + require(parameters.forall(_.arity == 0)) + require(parameters.distinct.size == parameters.size) + } + + /** + * Represents the instantiation of a schema. + */ + protected abstract class AssignedSchema[R <: SchematicLabel & WithArity[?], S <: SchematicLabel & WithArity[?]] { + type L <: LambdaDefinition[?, S, ? <: LabeledTree[? 
>: R]] + + val schema: R + val lambda: L + + require(schema.arity == lambda.arity) + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/fol/utils/FormulaUtils.scala b/lisa-front/src/main/scala/lisa/front/fol/utils/FormulaUtils.scala new file mode 100644 index 0000000000000000000000000000000000000000..342a8596c84732b0e39d79850e4d65c901e08aeb --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/fol/utils/FormulaUtils.scala @@ -0,0 +1,290 @@ +package lisa.front.fol.utils + +import lisa.front.fol.conversions.to.FormulaConversionsTo +import lisa.front.fol.definitions.FormulaDefinitions +import lisa.front.fol.ops.FormulaOps + +trait FormulaUtils extends TermUtils with FormulaDefinitions with FormulaConversionsTo with FormulaOps { + + type RenamedPredicate[B <: PredicateLabel[?]] = RenamedLabel[PredicateLabel[?], SchematicPredicateLabel[?], B] + type RenamedPredicateSchema = RenamedPredicate[SchematicPredicateLabel[?]] + extension [L <: PredicateLabel[?]](renamed: RenamedPredicate[L]) { + def toAssignment: AssignedPredicate = { + val parameters = freshIds(Set.empty, renamed.from.arity).map(SchematicTermLabel.apply[0]) + AssignedPredicate.unsafe(renamed.from, LambdaPredicate.unsafe(parameters, PredicateFormula.unsafe(renamed.to, parameters.map(Term.unsafe(_, Seq.empty))))) + } + } + type RenamedConnector[B <: ConnectorLabel[?]] = RenamedLabel[ConnectorLabel[?], SchematicConnectorLabel[?], B] + type RenamedConnectorSchema = RenamedConnector[SchematicConnectorLabel[?]] + extension [L <: ConnectorLabel[?]](renamed: RenamedConnector[L]) { + def toAssignment: AssignedConnector = { + val parameters = freshIds(Set.empty, renamed.from.arity).map(SchematicPredicateLabel.apply[0]) + AssignedConnector.unsafe(renamed.from, LambdaConnector.unsafe(parameters, ConnectorFormula.unsafe(renamed.to, parameters.map(PredicateFormula.unsafe(_, Seq.empty))))) + } + } + + case class LambdaPredicate[N <: Arity] private (parameters: Seq[SchematicTermLabel[0]], body: Formula) extends 
LambdaDefinition[N, SchematicTermLabel[0], Formula] { + override type U = Term + override protected def instantiate(args: Seq[Term]): Formula = + instantiateFormulaSchemas(body, functions = parameters.zip(args).map { case (from, to) => AssignedFunction(from, LambdaFunction(_ => to)) }) + } + object LambdaPredicate { + def apply[N <: Arity](f: FillArgs[SchematicTermLabel[0], N] => Formula)(using v: ValueOf[N]): LambdaPredicate[N] = { + val n = v.value + val dummyVariable = SchematicTermLabel[0]("") + val taken = freeSchematicTermLabelsOf(fillTupleParameters(_ => dummyVariable, n, f)._2).map(_.id) + val (params, body) = fillTupleParameters(SchematicTermLabel.apply[0](_), n, f, taken) + new LambdaPredicate(params.toSeq, body) + } + def apply(f: Formula): LambdaPredicate[0] = LambdaPredicate(Seq.empty, f) + def unsafe(parameters: Seq[SchematicTermLabel[0]], body: Formula): LambdaPredicate[?] = new LambdaPredicate(parameters, body) + } + + case class AssignedPredicate private (schema: SchematicPredicateLabel[?], lambda: LambdaPredicate[?]) extends AssignedSchema[SchematicPredicateLabel[?], SchematicTermLabel[0]] { + override type L = LambdaPredicate[?] 
+  }
+  object AssignedPredicate {
+    def apply[N <: Arity](schema: SchematicPredicateLabel[N], lambda: LambdaPredicate[N])(using v: ValueOf[N]): AssignedPredicate = new AssignedPredicate(schema, lambda)
+    def unsafe(schema: SchematicPredicateLabel[?], lambda: LambdaPredicate[?]): AssignedPredicate = new AssignedPredicate(schema, lambda)
+  }
+  given Conversion[Formula, LambdaPredicate[0]] = LambdaPredicate.apply
+  given labelToLambdaPredicate[T](using Conversion[T, Formula]): Conversion[T, LambdaPredicate[0]] = (t: T) => {
+    val formula: Formula = t
+    formula
+  }
+  // given lambdaToLambdaPredicate[N <: Arity](using ValueOf[N]): Conversion[FillArgs[SchematicTermLabel[0], N] => Formula, LambdaPredicate[N]] = LambdaPredicate.apply
+  given lambdaToLambdaPredicate1: Conversion[SchematicTermLabel[0] => Formula, LambdaPredicate[1]] = LambdaPredicate.apply
+  given lambdaToLambdaPredicate2: Conversion[((SchematicTermLabel[0], SchematicTermLabel[0])) => Formula, LambdaPredicate[2]] = LambdaPredicate.apply
+
+  case class LambdaConnector[N <: Arity] private (parameters: Seq[SchematicPredicateLabel[0]], body: Formula) extends LambdaDefinition[N, SchematicPredicateLabel[0], Formula] {
+    override type U = Formula
+    override protected def instantiate(args: Seq[Formula]): Formula =
+      instantiateFormulaSchemas(body, predicates = parameters.zip(args).map { case (from, to) => AssignedPredicate(from, LambdaPredicate(_ => to)) })
+  }
+  object LambdaConnector {
+    def apply[N <: Arity](f: FillArgs[SchematicPredicateLabel[0], N] => Formula)(using v: ValueOf[N]): LambdaConnector[N] = {
+      val n = v.value
+      val dummyVariable = SchematicPredicateLabel[0]("")
+      val taken = schematicPredicatesOf(fillTupleParameters(_ => dummyVariable, n, f)._2).map(_.id)
+      val (params, body) = fillTupleParameters(SchematicPredicateLabel.apply[0](_), n, f, taken)
+      new LambdaConnector(params.toSeq, body)
+    }
+    def apply(f: Formula): LambdaConnector[0] = LambdaConnector(Seq.empty, f)
+    def unsafe(parameters: Seq[SchematicPredicateLabel[0]], body: Formula): LambdaConnector[?] = new LambdaConnector(parameters, body)
+  }
+  given Conversion[Formula, LambdaConnector[0]] = LambdaConnector.apply
+  given labelToLambdaConnector[T](using Conversion[T, Formula]): Conversion[T, LambdaConnector[0]] = LambdaConnector.apply
+  given lambdaToLambdaConnector1: Conversion[SchematicPredicateLabel[0] => Formula, LambdaConnector[1]] = LambdaConnector.apply
+  given lambdaToLambdaConnector2: Conversion[((SchematicPredicateLabel[0], SchematicPredicateLabel[0])) => Formula, LambdaConnector[2]] = LambdaConnector.apply
+
+  case class AssignedConnector private (schema: SchematicConnectorLabel[?], lambda: LambdaConnector[?]) extends AssignedSchema[SchematicConnectorLabel[?], SchematicPredicateLabel[0]] {
+    override type L = LambdaConnector[?]
+  }
+  object AssignedConnector {
+    def apply[N <: Arity](schema: SchematicConnectorLabel[N], lambda: LambdaConnector[N])(using v: ValueOf[N]): AssignedConnector = new AssignedConnector(schema, lambda)
+    def unsafe(schema: SchematicConnectorLabel[?], lambda: LambdaConnector[?]): AssignedConnector = new AssignedConnector(schema, lambda)
+  }
+
+  object Assigned {
+    def apply[N <: Arity](schema: SchematicTermLabel[N], lambda: LambdaFunction[N])(using v: ValueOf[N]): AssignedFunction = AssignedFunction(schema, lambda)
+    def apply[N <: Arity](schema: SchematicPredicateLabel[N], lambda: LambdaPredicate[N])(using v: ValueOf[N]): AssignedPredicate = AssignedPredicate(schema, lambda)
+    def apply[N <: Arity](schema: SchematicConnectorLabel[N], lambda: LambdaConnector[N])(using v: ValueOf[N]): AssignedConnector = AssignedConnector(schema, lambda)
+  }
+
+  def toKernel(lambda: LambdaPredicate[?]): lisa.kernel.fol.FOL.LambdaTermFormula =
+    lisa.kernel.fol.FOL.LambdaTermFormula(
+      lambda.parameters.map((label: SchematicTermLabel[0]) => {
+        val r = toKernel(label)
+        r
+      }),
+      lambda.body
+    )
+  given Conversion[LambdaPredicate[?], lisa.kernel.fol.FOL.LambdaTermFormula] = toKernel
+
+  def toKernel(lambda: LambdaConnector[?]): lisa.kernel.fol.FOL.LambdaFormulaFormula =
+    lisa.kernel.fol.FOL.LambdaFormulaFormula(lambda.parameters.map(toKernel), lambda.body)
+  given Conversion[LambdaConnector[?], lisa.kernel.fol.FOL.LambdaFormulaFormula] = toKernel
+
+  /**
+   * A simple procedure to handle the fact that the kernel does not support schematic connectors.
+   * These will be converted into fresh nullary predicate symbols, which can be considered equivalent when used in some contexts (e.g. equivalence checking).
+   * @param formulas the formulas to be adapted
+   * @return new formulas that have been adapted
+   */
+  def adaptConnectorSchemas(formulas: IndexedSeq[Formula]): IndexedSeq[Formula] = {
+    def recursive(
+      formula: Formula,
+      predicates: Set[SchematicPredicateLabel[?]],
+      translation: Map[ConnectorFormula, SchematicPredicateLabel[?]]
+    ): (Formula, Set[SchematicPredicateLabel[?]], Map[ConnectorFormula, SchematicPredicateLabel[?]]) = formula match {
+      case other: PredicateFormula => (other, predicates, translation)
+      case connector @ ConnectorFormula(label, args) =>
+        label match {
+          case schematic: SchematicConnectorLabel[?] =>
+            translation.get(connector) match {
+              case Some(predicate) => (PredicateFormula.unsafe(predicate, Seq.empty), predicates, translation)
+              case None =>
+                val newId = freshId(predicates.map(_.id), schematic.id)
+                val newLabel = SchematicPredicateLabel[0](newId)
+                (PredicateFormula.unsafe(newLabel, Seq.empty), predicates + newLabel, translation + (connector -> newLabel))
+            }
+          case _ =>
+            val (newFormulas, newAllPredicates, newAllTranslation) = args.foldLeft((Seq.empty[Formula], predicates, translation)) { case ((acc, accPredicates, accTranslation), arg) =>
+              val (newFormula, np, nt) = recursive(arg, accPredicates, accTranslation)
+              (acc :+ newFormula, np, nt)
+            }
+            (ConnectorFormula.unsafe(label, newFormulas), newAllPredicates, newAllTranslation)
+        }
+      case BinderFormula(label, bound, inner) =>
+        val (newInner, newPredicates, newTranslation) = recursive(inner, predicates, translation)
+        (BinderFormula(label, bound, newInner), newPredicates, newTranslation)
+    }
+    val schematicPredicates = formulas.flatMap(schematicPredicatesOf).toSet
+    val (translatedFormulas, _, _) = formulas.foldLeft((IndexedSeq.empty[Formula], schematicPredicates, Map.empty[ConnectorFormula, SchematicPredicateLabel[?]])) {
+      case ((acc, taken, currentTranslation), formula) =>
+        val (translatedFormula, newTaken, newTranslation) = recursive(formula, taken, currentTranslation)
+        (acc :+ translatedFormula, newTaken, newTranslation)
+    }
+    translatedFormulas
+  }
+
+  /**
+   * @see [[lisa.kernel.fol.FOL.isSame]]
+   */
+  def isSame(f1: Formula, f2: Formula): Boolean =
+    adaptConnectorSchemas(IndexedSeq(f1, f2)) match {
+      case IndexedSeq(af1, af2) =>
+        lisa.kernel.fol.FOL.isSame(af1, af2)
+      case e => throw new MatchError(e)
+    }
+
+  /**
+   * @see [[lisa.kernel.fol.FOL.Formula#freeVariables]]
+   */
+  def freeVariablesOf(formula: Formula): Set[VariableLabel] = formula match {
+    case PredicateFormula(_, args) => args.flatMap(freeVariablesOf).toSet
+    case ConnectorFormula(_, args) => args.flatMap(freeVariablesOf).toSet
+    case BinderFormula(_, bound, inner) => freeVariablesOf(inner) - bound
+  }
+
+  def freeTermLabelsOf(formula: Formula): Set[TermLabel[?]] = formula match {
+    case PredicateFormula(_, args) => args.flatMap(termLabelsOf).toSet
+    case ConnectorFormula(_, args) => args.flatMap(termLabelsOf).toSet
+    case BinderFormula(_, bound, inner) => termLabelsOf(inner) - bound
+  }
+  def termLabelsOf(formula: Formula): Set[TermLabel[?]] = formula match {
+    case PredicateFormula(_, args) => args.flatMap(termLabelsOf).toSet
+    case ConnectorFormula(_, args) => args.flatMap(termLabelsOf).toSet
+    case BinderFormula(_, bound, inner) => termLabelsOf(inner)
+  }
+
+  /**
+   * @see [[lisa.kernel.fol.FOL.Formula#schematicFunctions]]
+   */
+  def freeSchematicTermLabelsOf(formula: Formula): Set[SchematicTermLabel[?]] =
+    freeTermLabelsOf(formula).collect { case schematic: SchematicTermLabel[?] => schematic }
+
+  def schematicTermLabelsOf(formula: Formula): Set[SchematicTermLabel[?]] =
+    termLabelsOf(formula).collect { case schematic: SchematicTermLabel[?] => schematic }
+
+  def predicatesOf(formula: Formula): Set[PredicateLabel[?]] = formula match {
+    case PredicateFormula(label, _) => Set(label)
+    case ConnectorFormula(_, args) => args.flatMap(predicatesOf).toSet
+    case BinderFormula(_, _, inner) => predicatesOf(inner)
+  }
+
+  /**
+   * @see [[lisa.kernel.fol.FOL.Formula#schematicPredicates]]
+   */
+  def schematicPredicatesOf(formula: Formula): Set[SchematicPredicateLabel[?]] =
+    predicatesOf(formula).collect { case schematic: SchematicPredicateLabel[?] => schematic }
+
+  def schematicConnectorsOf(formula: Formula): Set[SchematicConnectorLabel[?]] = formula match {
+    case PredicateFormula(_, _) => Set.empty
+    case ConnectorFormula(label, args) =>
+      val set = label match {
+        case _: ConstantConnectorLabel[?] => Set.empty
+        case schematic: SchematicConnectorLabel[?] => Set(schematic)
+      }
+      set ++ args.flatMap(schematicConnectorsOf)
+    case BinderFormula(_, _, inner) => schematicConnectorsOf(inner)
+  }
+
+  def declaredBoundVariablesOf(formula: Formula): Set[VariableLabel] = formula match {
+    case PredicateFormula(_, _) => Set.empty
+    case ConnectorFormula(_, args) => args.flatMap(declaredBoundVariablesOf).toSet
+    case BinderFormula(_, bound, inner) => declaredBoundVariablesOf(inner) + bound
+  }
+
+  protected def isFormulaWellFormed(formula: Formula)(using ctx: Scope): Boolean = formula match {
+    case PredicateFormula(label, args) => args.forall(isWellFormed)
+    case ConnectorFormula(_: SchematicConnectorLabel[?], Seq()) => false // Use nullary predicates instead
+    case ConnectorFormula(label, args) => args.forall(isFormulaWellFormed)
+    case BinderFormula(_, bound, inner) =>
+      !ctx.boundVariables.contains(bound) && isFormulaWellFormed(inner)(using ctx.copy(boundVariables = ctx.boundVariables + bound))
+  }
+
+  def isWellFormed(formula: Formula): Boolean = isFormulaWellFormed(formula)(using Scope())
+
+  def substituteVariables(formula: Formula, map: Map[VariableLabel, Term]): Formula = formula match {
+    case PredicateFormula(label, args) => PredicateFormula.unsafe(label, args.map(substituteVariables(_, map)))
+    case ConnectorFormula(label, args) => ConnectorFormula.unsafe(label, args.map(substituteVariables(_, map)))
+    case BinderFormula(label, bound, inner) =>
+      val newSubst = map - bound
+      val fv = map.values.flatMap(freeVariablesOf).toSet
+      if (fv.contains(bound)) {
+        val newBoundVariable = VariableLabel(freshId(fv.map(_.id), bound.id))
+        val newInner = substituteVariables(inner, Map(bound -> VariableTerm(newBoundVariable)))
+        BinderFormula(label, newBoundVariable, substituteVariables(newInner, newSubst))
+      } else {
+        BinderFormula(label, bound, substituteVariables(inner, newSubst))
+      }
+  }
+
+  def instantiateFormulaSchemas(
+    formula: Formula,
+    functions: Seq[AssignedFunction] = Seq.empty,
+    predicates: Seq[AssignedPredicate] = Seq.empty,
+    connectors: Seq[AssignedConnector] = Seq.empty
+  ): Formula = {
+    val predicatesMap: Map[SchematicPredicateLabel[?], LambdaPredicate[?]] = predicates.map(i => i.schema -> i.lambda).toMap
+    val connectorsMap: Map[SchematicConnectorLabel[?], LambdaConnector[?]] = connectors.map(i => i.schema -> i.lambda).toMap
+    def instantiateInternal(formula: Formula): Formula = formula match {
+      case PredicateFormula(label, args) =>
+        lazy val newArgs = args.map(instantiateTermSchemas(_, functions))
+        label match {
+          case f: SchematicPredicateLabel[?] if predicatesMap.contains(f) => predicatesMap(f).unsafe(newArgs)
+          case _ => PredicateFormula.unsafe(label, newArgs)
+        }
+      case ConnectorFormula(label, args) =>
+        lazy val newArgs = args.map(instantiateInternal)
+        label match {
+          case f: SchematicConnectorLabel[?] if connectorsMap.contains(f) => connectorsMap(f).unsafe(newArgs)
+          case _ => ConnectorFormula.unsafe(label, newArgs)
+        }
+      case BinderFormula(label, bound, inner) =>
+        // TODO Requires testing. Match against substituteVariables
+        val newFuns = functions.filterNot(i => i.schema == bound)
+        val fv = newFuns.flatMap(f => freeVariablesOf(f.lambda.body)).toSet
+        if (fv.contains(bound)) {
+          val newBoundVariable = VariableLabel(freshId(fv.map(_.id), bound.id))
+          val newInner = substituteVariables(inner, Map(bound -> VariableTerm(newBoundVariable)))
+          BinderFormula(label, newBoundVariable, instantiateFormulaSchemas(newInner, newFuns, predicates, connectors))
+        } else {
+          BinderFormula(label, bound, instantiateFormulaSchemas(inner, newFuns, predicates, connectors))
+        }
+    }
+    instantiateInternal(formula)
+  }
+
+  def unsafeRenameVariables(formula: Formula, map: Map[VariableLabel, VariableLabel]): Formula = formula match {
+    case PredicateFormula(label, args) =>
+      PredicateFormula.unsafe(label, args.map(unsafeRenameVariables(_, map)))
+    case ConnectorFormula(label, args) =>
+      ConnectorFormula.unsafe(label, args.map(unsafeRenameVariables(_, map)))
+    case BinderFormula(label, bound, inner) =>
+      val newBound = map.getOrElse(bound, bound)
+      BinderFormula(label, newBound, unsafeRenameVariables(inner, map))
+  }
+
+}
diff --git a/lisa-front/src/main/scala/lisa/front/fol/utils/TermUtils.scala b/lisa-front/src/main/scala/lisa/front/fol/utils/TermUtils.scala
new file mode 100644
index 0000000000000000000000000000000000000000..fab52e69caa34093bafd6ca3004a5a51c54b429b
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/fol/utils/TermUtils.scala
@@ -0,0 +1,109 @@
+package lisa.front.fol.utils
+
+import lisa.front.fol.conversions.to.TermConversionsTo
+import lisa.front.fol.definitions.TermDefinitions
+import lisa.front.fol.ops.CommonOps
+
+import scala.annotation.targetName
+
+trait TermUtils extends TermDefinitions with TermConversionsTo with CommonOps with CommonUtils {
+
+  type RenamedFunction[B <: TermLabel[?]] = RenamedLabel[TermLabel[?], SchematicTermLabel[?], B]
+  type RenamedFunctionSchema = RenamedFunction[SchematicTermLabel[?]]
+
+  extension [L <: TermLabel[?]](renamed: RenamedFunction[L]) {
+    def toAssignment: AssignedFunction = {
+      val parameters = freshIds(Set.empty, renamed.from.arity).map(SchematicTermLabel.apply[0])
+      AssignedFunction.unsafe(renamed.from, LambdaFunction.unsafe(parameters, Term.unsafe(renamed.to, parameters.map(Term.unsafe(_, Seq.empty)))))
+    }
+  }
+
+  def toKernel(lambda: LambdaFunction[?]): lisa.kernel.fol.FOL.LambdaTermTerm =
+    lisa.kernel.fol.FOL.LambdaTermTerm(lambda.parameters.map(toKernel), lambda.body)
+  given Conversion[LambdaFunction[?], lisa.kernel.fol.FOL.LambdaTermTerm] = toKernel
+
+  case class LambdaFunction[N <: Arity] private (parameters: Seq[SchematicTermLabel[0]], body: Term) extends LambdaDefinition[N, SchematicTermLabel[0], Term] {
+    override type U = Term
+    override protected def instantiate(args: Seq[Term]): Term =
+      instantiateTermSchemas(body, parameters.zip(args).map { case (from, to) => AssignedFunction(from, LambdaFunction(_ => to)) })
+  }
+  object LambdaFunction {
+    def apply[N <: Arity](f: FillArgs[SchematicTermLabel[0], N] => Term)(using v: ValueOf[N]): LambdaFunction[N] = {
+      val n = v.value
+      val dummyVariable = SchematicTermLabel[0]("") // Used to identify the existing free variables, doesn't matter if this name collides
+      val taken = schematicTermLabelsOf(fillTupleParameters(_ => dummyVariable, n, f)._2).map(_.id)
+      val (params, body) = fillTupleParameters(SchematicTermLabel.apply[0](_), n, f, taken)
+      new LambdaFunction(params.toSeq, body)
+    }
+    def apply(t: Term): LambdaFunction[0] = LambdaFunction(Seq.empty, t)
+    def unsafe(parameters: Seq[SchematicTermLabel[0]], body: Term): LambdaFunction[?] = new LambdaFunction(parameters, body)
+  }
+
+  case class AssignedFunction private (schema: SchematicTermLabel[?], lambda: LambdaFunction[?]) extends AssignedSchema[SchematicTermLabel[?], SchematicTermLabel[0]] {
+    override type L = LambdaFunction[?]
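`LambdaFunction.apply` and `toAssignment` above both rely on a `freshId`/`freshIds`-style search: collect the identifiers already `taken`, then pick parameter names that cannot clash with them. The collision-avoidance scheme can be sketched on its own; the `freshId`/`freshIds` below are illustrative reimplementations over plain strings, not LISA's actual helpers.

```scala
// Illustrative fresh-name generation: keep appending numeric suffixes to a
// base identifier until the candidate no longer appears in the taken set.
def freshId(taken: Set[String], base: String): String =
  if (!taken.contains(base)) base
  else LazyList.from(0).map(i => s"${base}_$i").find(id => !taken.contains(id)).get

// Fresh ids for several parameters at once, threading the taken set so that
// the generated names are also distinct from each other.
def freshIds(taken: Set[String], n: Int, base: String = "x"): Seq[String] =
  (0 until n).foldLeft((taken, Vector.empty[String])) { case ((t, acc), _) =>
    val id = freshId(t, base)
    (t + id, acc :+ id)
  }._2
```

Threading the accumulated set through the fold is the important part: without it, `freshIds` would hand out the same "fresh" name twice.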
+  }
+  object AssignedFunction {
+    def apply[N <: Arity](schema: SchematicTermLabel[N], lambda: LambdaFunction[N])(using v: ValueOf[N]): AssignedFunction = new AssignedFunction(schema, lambda)
+    def unsafe(schema: SchematicTermLabel[?], lambda: LambdaFunction[?]): AssignedFunction = new AssignedFunction(schema, lambda)
+  }
+
+  given Conversion[Term, LambdaFunction[0]] = LambdaFunction.apply
+  given labelToLambdaFunction[T](using Conversion[T, Term]): Conversion[T, LambdaFunction[0]] = LambdaFunction.apply
+  given lambdaToLambdaFunction1: Conversion[SchematicTermLabel[0] => Term, LambdaFunction[1]] = LambdaFunction.apply
+  given lambdaToLambdaFunction2: Conversion[((SchematicTermLabel[0], SchematicTermLabel[0])) => Term, LambdaFunction[2]] = LambdaFunction.apply
+
+  /**
+   * @see [[lisa.kernel.fol.FOL.isSame]]
+   */
+  def isSame(t1: Term, t2: Term): Boolean =
+    lisa.kernel.fol.FOL.isSame(t1, t2)
+
+  /**
+   * @see [[lisa.kernel.fol.FOL.Term#freeVariables]]
+   */
+  def freeVariablesOf(term: Term): Set[VariableLabel] = term match {
+    case VariableTerm(label) => Set(label)
+    case Term(label, args) => args.flatMap(freeVariablesOf).toSet
+  }
+
+  def termLabelsOf(term: Term): Set[TermLabel[?]] = term match {
+    case Term(label, args) => args.flatMap(termLabelsOf).toSet + label
+  }
+
+  /**
+   * @see [[lisa.kernel.fol.FOL.Term#schematicFunctions]]
+   */
+  def schematicTermLabelsOf(term: Term): Set[SchematicTermLabel[?]] =
+    termLabelsOf(term).collect { case schematic: SchematicTermLabel[?] => schematic }
+
+  protected case class Scope(boundVariables: Set[VariableLabel] = Set.empty)
+
+  /**
+   * Checks whether a term is well-formed. Currently returns <code>true</code> at all times.
+   * @param term the term to check
+   * @return if it is well-formed
+   */
+  def isWellFormed(term: Term): Boolean = true
+
+  def substituteVariables(term: Term, map: Map[VariableLabel, Term]): Term = term match {
+    case VariableTerm(label) => map.getOrElse(label, term)
+    case Term(label, args) => Term.unsafe(label, args.map(substituteVariables(_, map)))
+  }
+
+  def instantiateTermSchemas(term: Term, functions: Seq[AssignedFunction]): Term = {
+    val map: Map[SchematicTermLabel[?], LambdaFunction[?]] = functions.map(i => i.schema -> i.lambda).toMap
+    def instantiateInternal(term: Term): Term = term match {
+      case Term(label, args) =>
+        lazy val newArgs = args.map(instantiateInternal)
+        label match {
+          case f: SchematicTermLabel[?] if map.contains(f) => map(f).unsafe(newArgs)
+          case _ => Term.unsafe(label, newArgs)
+        }
+    }
+    instantiateInternal(term)
+  }
+
+  def unsafeRenameVariables(term: Term, map: Map[VariableLabel, VariableLabel]): Term =
+    substituteVariables(term, map.view.mapValues(VariableTerm.apply).toMap)
+
+}
diff --git a/lisa-front/src/main/scala/lisa/front/package.scala b/lisa-front/src/main/scala/lisa/front/package.scala
new file mode 100644
index 0000000000000000000000000000000000000000..e5b62a2a3dfacd2e9836b1ba6c4737b09af84bdc
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/package.scala
@@ -0,0 +1,7 @@
+package lisa.front
+
+export lisa.front.fol.FOL.{_, given}
+export lisa.front.proof.Proof.{_, given}
+export lisa.front.parser.FrontReader.*
+export lisa.front.parser.FrontMacro.{_, given}
+export lisa.front.printer.FrontPositionedPrinter.*
diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontLexer.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontLexer.scala
new file mode 100644
index 0000000000000000000000000000000000000000..dbdd78045c4ecda91850f3ddae42eea5ab725d61
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/parser/FrontLexer.scala
@@ -0,0 +1,174 @@
+package lisa.front.parser
+
+import lisa.front.parser.FrontReadingException.LexingException
+import lisa.front.parser.FrontSymbols
+import lisa.front.parser.FrontToken
+import lisa.front.parser.FrontToken.*
+
+import scala.util.matching.Regex
+import scala.util.parsing.combinator.RegexParsers
+
+private trait FrontLexer extends RegexParsers {
+
+  override def skipWhitespace: Boolean = true
+  override protected val whiteSpace: Regex = "[ \t\f]+".r
+
+  protected val S: FrontSymbols
+
+  protected def initialIndentation: Parser[InitialIndentation] = positioned(
+    " *".r ^^ (str => InitialIndentation(str.length))
+  )
+  protected def newLine: Parser[NewLineWithIndentation] = positioned(
+    "\r?\n *".r ^^ (str => NewLineWithIndentation(str.count(_ == ' ')))
+  )
+
+  private val identifierPattern = "[a-zA-Z_][a-zA-Z0-9_]*"
+
+  private def identifier: Parser[Identifier] = positioned(
+    identifierPattern.r ^^ (str => Identifier(str))
+  )
+  private def schematicIdentifier: Parser[SchematicIdentifier] = positioned(
+    (raw"\${S.QuestionMark}$identifierPattern").r ^^ (str => SchematicIdentifier(str.tail))
+  )
+  private def schematicConnectorIdentifier: Parser[SchematicConnectorIdentifier] = positioned(
+    (raw"\${S.QuestionMark}\${S.QuestionMark}$identifierPattern").r ^^ (str => SchematicConnectorIdentifier(str.tail.tail))
+  )
+
+  private def keywords: Parser[FrontToken] = positioned(
+    S.Forall ^^^ Forall()
+      | S.ExistsOne ^^^ ExistsOne()
+      | S.Exists ^^^ Exists()
+      | S.Iff ^^^ Iff()
+      | S.Implies ^^^ Implies()
+      | S.Or ^^^ Or()
+      | S.And ^^^ And()
+      | S.Exclamation ^^^ Not()
+      | S.Turnstile ^^^ Turnstile()
+      | S.Ellipsis ^^^ Ellipsis()
+      | S.Subset ^^^ Subset()
+      | S.Membership ^^^ Membership()
+      | S.EmptySet ^^^ EmptySet()
+      | S.Equal ^^^ Equal()
+      | S.Tilde ^^^ SameCardinality()
+      | S.Backslash ^^^ LocalBinder()
+      | S.CurlyBracketOpen ^^^ CurlyBracketOpen()
+      | S.CurlyBracketClose ^^^ CurlyBracketClose()
+      | S.ParenthesisOpen ^^^ ParenthesisOpen()
+      | S.ParenthesisClose ^^^ ParenthesisClose()
+      | S.Dot ^^^ Dot()
+      | S.Comma ^^^ Comma()
+      | S.Semicolon ^^^ Semicolon()
+  )
+
+  protected final def standardTokens: Parser[FrontToken] =
+    keywords | newLine | schematicConnectorIdentifier | schematicIdentifier | identifier
+
+  // Order matters! Special keywords should be matched before identifiers
+  protected def tokens: Parser[Seq[FrontToken]] =
+    phrase(initialIndentation ~ rep(standardTokens) ^^ { case h ~ t => h +: t })
+
+  final def apply(str: String): Seq[FrontToken] =
+    parse(tokens, str) match {
+      case e @ NoSuccess(msg, next) => throw LexingException(e.toString, next.pos)
+      case Success(result, next) => result
+      case e => throw new MatchError(e)
+    }
+}
+
+/**
+ * The lexer converts a sequence of characters into low-level tokens ([[FrontToken]]), for instance identifiers, symbols, separators.
+ */
+object FrontLexer {
+
+  private trait FrontLexerExtended extends FrontLexer {
+    private val kernelRuleIdentifiers = KernelRuleIdentifiers(S)
+    import kernelRuleIdentifiers.*
+
+    private def integerLiteral: Parser[IntegerLiteral] = positioned(
+      "0|-?[1-9][0-9]*".r ^^ { str => IntegerLiteral(str.toInt) }
+    )
+
+    private def rules: Parser[RuleName] =
+      positioned(
+        (Hypothesis
+          | Cut
+          | Rewrite
+          | Weakening
+          | LeftAnd
+          | RightAnd
+          | LeftOr
+          | RightOr
+          | LeftImplies
+          | RightImplies
+          | LeftIff
+          | RightIff
+          | LeftNot
+          | RightNot
+          | LeftForall
+          | RightForall
+          | LeftExists
+          | RightExists
+          | LeftExistsOne
+          | RightExistsOne
+          | LeftRefl
+          | RightRefl
+          | LeftSubstEq
+          | RightSubstEq
+          | LeftSubstIff
+          | RightSubstIff
+          | FunInstantiation
+          | PredInstantiation
+          | SubproofHidden // Must come before `SubproofShown`
+          | SubproofShown //
+          | Import) ^^ RuleName.apply
+      )
+
+    override protected def tokens: Parser[Seq[FrontToken]] =
+      phrase(initialIndentation ~ rep(rules | integerLiteral | (S.SquareBracketOpen ^^^ SquareBracketOpen()) | (S.SquareBracketClose ^^^ SquareBracketClose()) | standardTokens) ^^ { case h ~ t =>
+        h +: t
+      })
+  }
+
+  private trait FrontLexerAscii extends FrontLexer {
+    override protected val S: FrontSymbols = FrontSymbols.FrontAsciiSymbols
+  }
+  private object FrontLexerStandardAscii extends FrontLexerAscii
+
+  private trait FrontLexerUnicode extends FrontLexer {
+    override protected val S: FrontSymbols = FrontSymbols.FrontUnicodeSymbols
+  }
+  private object FrontLexerStandardUnicode extends FrontLexerUnicode
+  private object FrontLexerExtendedUnicode extends FrontLexerUnicode with FrontLexerExtended // Order of inheritance matters
+
+  private def postProcessor(lines: Boolean, indentation: Boolean)(tokens: Seq[FrontToken]): Seq[FrontToken] = {
+    val tokensWithEnd = tokens :+ End()
+    tokensWithEnd.flatMap {
+      case token @ NewLineWithIndentation(n) =>
+        val tokenLine = NewLine()
+        tokenLine.pos = token.pos
+        val tokenIndentation = Indentation(n)
+        tokenIndentation.pos = token.pos
+        if (indentation)
+          Seq(tokenLine, tokenIndentation)
+        else if (lines)
+          Seq(tokenLine)
+        else
+          Seq.empty
+      case token @ InitialIndentation(n) =>
+        val newToken = Indentation(n)
+        newToken.pos = token.pos
+        if (indentation) Seq(newToken) else Seq.empty
+      case other => Seq(other)
+    }
+  }
+
+  def lexingAscii(str: String, lines: Boolean = false, indentation: Boolean = false): Seq[FrontToken] =
+    postProcessor(lines, indentation)(FrontLexerStandardAscii(str))
+
+  def lexingUnicode(str: String, lines: Boolean = false, indentation: Boolean = false): Seq[FrontToken] =
+    postProcessor(lines, indentation)(FrontLexerStandardUnicode(str))
+
+  def lexingExtendedUnicode(str: String): Seq[FrontToken] =
+    postProcessor(lines = true, indentation = true)(FrontLexerExtendedUnicode(str))
+
+}
diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontMacro.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontMacro.scala
new file mode 100644
index 0000000000000000000000000000000000000000..dbca0a4a60f6fb8cd49f63211b099d069d44027f
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/parser/FrontMacro.scala
@@ -0,0 +1,431 @@
+package lisa.front.parser
+
+import lisa.front.fol.FOL.*
+import lisa.front.printer.FrontPositionedPrinter
+import lisa.front.proof.Proof.*
+
+// Note: on Intellij you may want to disable syntax highlighting for this file
+// ("File Types" => "Text" => "Ignored Files and Folders", add "FrontMacro.scala")
+
+/**
+ * Macros to enable compile-time string interpolation. For instance:
+ * <pre>
+ * val f: Formula = ...
+ * formula"|- (a /\ $f); ?c"
+ * </pre>
+ */
+object FrontMacro {
+  import scala.quoted.*
+
+  // https://github.com/lampepfl/dotty/issues/8577#issuecomment-1014729373
+
+  extension (inline sc: StringContext) {
+    transparent inline def term: Any = ${ SIParts.scMacro[TermParts]('sc) }
+    transparent inline def formula: Any = ${ SIParts.scMacro[FormulaParts]('sc) }
+    transparent inline def sequent: Any = ${ SIParts.scMacro[SequentParts]('sc) }
+    transparent inline def partial: Any = ${ SIParts.scMacro[PartialSequentParts]('sc) }
+  }
+
+  class TermParts[P <: Tuple](parts: P) {
+    transparent inline def apply(inline args: Any*): Term = ${ termApplyMacro('parts, 'args) }
+    // transparent inline def unapplySeq(inline arg: Any): Option[Seq[Any]] = ${ termUnapplyMacro('parts, 'arg) }
+  }
+  class FormulaParts[P <: Tuple](parts: P) {
+    transparent inline def apply(inline args: Any*): Formula = ${ formulaApplyMacro('parts, 'args) }
+  }
+  class SequentParts[P <: Tuple](parts: P) {
+    transparent inline def apply(inline args: Any*): Sequent = ${ sequentApplyMacro('parts, 'args) }
+  }
+  class PartialSequentParts[P <: Tuple](parts: P) {
+    transparent inline def apply(inline args: Any*): PartialSequent = ${ partialSequentApplyMacro('parts, 'args) }
+  }
+
+  trait SIParts[P <: Tuple](parts: P)
+  object SIParts {
+    def scMacro[SI[_ <: Tuple]](sc: Expr[StringContext])(using Quotes, Type[SI]): Expr[Any] = {
+      import quotes.reflect.*
+      val args = sc match {
+        case '{ StringContext(${ Varargs(args) }*) } => args
+      }
+      val tplExpr = Expr.ofTupleFromSeq(args)
+      val tplTpe = tplExpr.asTerm.tpe
+      val siTpe = TypeRepr.of[SI[Tuple]].asInstanceOf[TypeRepr & Matchable] match {
+        case AppliedType(siTpe, _) => siTpe
+      }
+      val siSym = siTpe.typeSymbol
+      val siTree =
+        New(TypeTree.of[SI[Tuple]])
+          .select(siSym.primaryConstructor)
+          .appliedToType(tplTpe)
+          .appliedTo(tplExpr.asTerm)
+      siTree.asExpr
+    }
+  }
+
+  /*private def termUnapplyMacro[P <: Tuple](parts: Expr[P], arg: Expr[Any])(using Quotes, Type[P]): Expr[Option[Seq[Any]]] = {
+    '{ None: Option[Seq[Term]] }
+  }*/
+
+  enum Variable {
+    val expr: Expr[Any]
+    case FunctionLabelVariable(expr: Expr[TermLabel[?]], placeholder: SchematicTermLabel[?])
+    case PredicateLabelVariable(expr: Expr[PredicateLabel[?]], placeholder: SchematicPredicateLabel[?])
+    case ConnectorLabelVariable(expr: Expr[ConnectorLabel[?]], placeholder: SchematicConnectorLabel[?])
+    case VariableLabelVariable(expr: Expr[VariableLabel], placeholder: VariableLabel)
+    case TermVariable(expr: Expr[Term], placeholder: SchematicTermLabel[0])
+    case FormulaVariable(expr: Expr[Formula], placeholder: SchematicPredicateLabel[0])
+  }
+  import Variable.*
+
+  case class Interpolator(idsAndVariables: Seq[(String, Variable)], tokens: Seq[FrontToken]) {
+    val variables: Seq[Variable] = idsAndVariables.map { case (_, variable) => variable }
+    val map: Map[String, Variable] = idsAndVariables.toMap
+  }
+
+  private def toTokens[P <: Tuple](parts: Expr[P], args: Expr[Seq[Any]])(using Quotes, Type[P]): Interpolator = {
+    import quotes.reflect.{Term as _, *}
+
+    // throw new Error(s"illegal interpolation variable: ${TypeTree.of[other]}")
+    // TypeTree[ConstantType(Constant({))]
+    def evaluateParts[Q <: Tuple](scrutiny: Type[Q], acc: Seq[String]): Seq[String] = scrutiny match {
+      case '[EmptyTuple] => acc
+      case '[head *: tail] =>
+        val string = TypeTree.of[head].tpe.asInstanceOf[TypeRepr & Matchable] match {
+          case ConstantType(cst) => cst.value.asInstanceOf[String] // Should always match and succeed
+        }
+        evaluateParts(Type.of[tail], string +: acc)
+    }
+    // `Type.of[P]` is equivalent to `summon[Type[P]]`
+    val evaluatedParts: Seq[String] = evaluateParts(Type.of[P], Seq.empty).reverse
+
+    val partsTokens: Seq[Seq[FrontToken]] = evaluatedParts.map(FrontLexer.lexingAscii(_)).map(_.init)
+    val takenNames: Set[String] = partsTokens.flatten.collect {
+      case FrontToken.Identifier(id) => id
+      case FrontToken.SchematicIdentifier(id) => id
+      case FrontToken.SchematicConnectorIdentifier(id) => id
+    }.toSet
+
+    val argsSeq: Seq[Expr[Any]] = args match {
+      case Varargs(es) => es
+    }
+
+    // TODO raise warning when using infix notation
+
+    def resolveArity[N <: Arity](expr: Expr[LabelType & WithArityType[N]])(using Type[N]): Int =
+      TypeTree.of[N].tpe.asInstanceOf[TypeRepr & Matchable] match {
+        case ConstantType(cst) => cst.value.asInstanceOf[Int]
+        case _ => report.errorAndAbort(s"loosely typed label variable, the arity must be known at compile time: ${Type.show[N]}", expr)
+      }
+
+    val idsAndVariables: Seq[(String, Variable)] = argsSeq.zipWithIndex
+      .foldLeft((Seq.empty[(String, Variable)], Map.empty[Any, String], takenNames)) { case ((acc, hashmap, taken), (expr, i)) =>
+        val id = hashmap.getOrElse(
+          expr.asTerm.toString, { // FIXME: `asTerm.toString` is not a safe way to check whether two expressions are `=:=`
+            val base = s"x$i"
+            if (taken.contains(base)) freshId(taken, base) else base
+          }
+        )
+        val variable = expr match {
+          case '{ $label: TermLabel[n] } => FunctionLabelVariable(label, SchematicTermLabel.unsafe(id, resolveArity(label)))
+          case '{ $label: PredicateLabel[n] } => PredicateLabelVariable(label, SchematicPredicateLabel.unsafe(id, resolveArity(label)))
+          case '{ $label: ConnectorLabel[n] } => ConnectorLabelVariable(label, SchematicConnectorLabel.unsafe(id, resolveArity(label)))
+          case '{ $label: VariableLabel } => VariableLabelVariable(label, VariableLabel(id))
+          case '{ $term: Term } => TermVariable(term, SchematicTermLabel[0](id))
+          case '{ $formula: Formula } => FormulaVariable(formula, SchematicPredicateLabel[0](id))
+          case '{ $t: q } => report.errorAndAbort(s"unsupported variable type: ${Type.show[q]}", expr)
+        }
+        ((id, variable) +: acc, hashmap + (expr.asTerm.toString -> id), taken + id)
+      }
+      ._1
+      .reverse
+
+    val variables = idsAndVariables.map { case (_, variable) => variable }
+
+    val variablesTokens: Seq[FrontToken] = variables.map {
+      case FunctionLabelVariable(_, placeholder) => FrontToken.SchematicIdentifier(placeholder.id)
+      case PredicateLabelVariable(_, placeholder) => FrontToken.SchematicIdentifier(placeholder.id)
+      case ConnectorLabelVariable(_, placeholder) => FrontToken.SchematicConnectorIdentifier(placeholder.id)
+      case VariableLabelVariable(_, placeholder) => FrontToken.Identifier(placeholder.id)
+      case TermVariable(_, placeholder) => FrontToken.SchematicIdentifier(placeholder.id)
+      case FormulaVariable(_, placeholder) => FrontToken.SchematicIdentifier(placeholder.id)
+    }
+
+    val tokens: Seq[FrontToken] = partsTokens.head ++ variablesTokens.zip(partsTokens.tail).flatMap { case (v, p) => v +: p } :+ FrontToken.End()
+
+    Interpolator(idsAndVariables, tokens)
+  }
+
+  private def getRenaming(variables: Seq[Variable])(using Quotes): Expr[
+    (
+      Seq[AssignedFunction],
+      Seq[AssignedPredicate],
+      Seq[AssignedConnector],
+      Map[VariableLabel, VariableLabel]
+    )
+  ] = {
+    import LiftFOL.{_, given}
+
+    def substMap[T, U](seq: Seq[(Expr[T], Expr[U])])(using Quotes, Type[T], Type[U]): Expr[Map[T, U]] = {
+      val list: Seq[Expr[(T, U)]] = seq.map { case (k, v) =>
+        '{ $k -> $v }
+      }
+      '{ ${ liftSeq(list) }.toMap }
+    }
+
+    val functionsMap: Expr[Seq[AssignedFunction]] = liftSeq(variables.collect { case FunctionLabelVariable(label, placeholder) =>
+      '{ RenamedLabel.unsafe(${ Expr(placeholder) }, $label).toAssignment }
+    })
+    val predicatesMap: Expr[Seq[AssignedPredicate]] = liftSeq(variables.collect { case PredicateLabelVariable(label, placeholder) =>
+      '{ RenamedLabel.unsafe(${ Expr(placeholder) }, $label).toAssignment }
+    })
+    val connectorsMap: Expr[Seq[AssignedConnector]] = liftSeq(variables.collect { case ConnectorLabelVariable(label, placeholder) =>
+      '{ RenamedLabel.unsafe(${ Expr(placeholder) }, $label).toAssignment }
+    })
+    val variablesMap: Expr[Map[VariableLabel, VariableLabel]] = substMap(variables.collect { case VariableLabelVariable(label, placeholder) =>
+      Expr(placeholder) -> label
+    })
+
+    val termsMap: Expr[Seq[AssignedFunction]] = liftSeq(variables.collect { case TermVariable(term, placeholder) =>
+      '{ AssignedFunction.unsafe(${ Expr(placeholder)(using toExprFunction0) }, LambdaFunction.unsafe(Seq.empty, $term)) }
+    })
+    val formulasMap: Expr[Seq[AssignedPredicate]] = liftSeq(variables.collect { case FormulaVariable(formula, placeholder) =>
+      '{ AssignedPredicate.unsafe(${ Expr(placeholder)(using toExprPredicate0) }, LambdaPredicate.unsafe(Seq.empty, $formula)) }
+    })
+
+    '{ ($functionsMap ++ $termsMap, $predicatesMap ++ $formulasMap, $connectorsMap, $variablesMap) }
+  }
+
+  def unsafeFixPointTermInstantiate(term: Term, functions: Seq[AssignedFunction], map: Map[VariableLabel, VariableLabel]): Term = {
+    val next = instantiateTermSchemas(unsafeRenameVariables(term, map), functions)
+    if (next == term) term else unsafeFixPointTermInstantiate(next, functions, map)
+  }
+
+  def unsafeFixPointFormulaInstantiate(
+    formula: Formula,
+    functions: Seq[AssignedFunction],
+    predicates: Seq[AssignedPredicate],
+    connectors: Seq[AssignedConnector],
+    map: Map[VariableLabel, VariableLabel]
+  ): Formula = {
+    val next = instantiateFormulaSchemas(unsafeRenameVariables(formula, map), functions, predicates, connectors)
+    if (next == formula) formula else unsafeFixPointFormulaInstantiate(next, functions, predicates, connectors, map)
+  }
+
+  private def typeCheck(
+    interpolator: Interpolator,
+    functions: Set[TermLabel[?]],
+    predicates: Set[PredicateLabel[?]],
+    connectors: Set[SchematicConnectorLabel[?]],
+    variables: Set[VariableLabel]
+  )(using Quotes): Unit = {
+    import quotes.reflect.*
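`unsafeFixPointTermInstantiate` and `unsafeFixPointFormulaInstantiate` above iterate one instantiation round until the result stops changing, so that placeholders introduced by one round of substitution are themselves resolved in later rounds. The fixpoint pattern can be shown in isolation; the string-rewriting `step` below is a toy stand-in for one round of schema instantiation, not LISA's code.

```scala
import scala.annotation.tailrec

// Generic fixpoint iteration: apply `step` until the value stops changing.
// The caller must ensure `step` eventually reaches a fixed point.
@tailrec
def fixpoint[A](x: A)(step: A => A): A = {
  val next = step(x)
  if (next == x) x else fixpoint(next)(step)
}

// Toy `step`: one round of textual replacement. Because `?a` rewrites to
// `?b`, which itself rewrites to `c`, a single round is not enough and the
// fixpoint loop is what resolves the chain.
val rules = Map("?b" -> "c", "?a" -> "?b")
def step(s: String): String = rules.foldLeft(s) { case (acc, (k, v)) => acc.replace(k, v) }
```

The termination burden is on the rewrite system, which is why the LISA methods carry the `unsafe` prefix: a cyclic assignment would loop forever.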
+
+    def reportArityMismatch(expr: Expr[?], expected: Int, actual: Int): Nothing =
+      report.errorAndAbort(s"arity mismatch: variable label expects $expected arguments but you provided $actual", expr)
+
+    // Either a function or a predicate
+    functions.flatMap(f => interpolator.map.get(f.id).map(f -> _)).foreach { case (f, variable) =>
+      variable match {
+        case FunctionLabelVariable(label, placeholder) =>
+          if (f.arity != placeholder.arity) {
+            reportArityMismatch(label, placeholder.arity, f.arity)
+          }
+        case TermVariable(label, placeholder) =>
+          if (f.arity != placeholder.arity) {
+            report.errorAndAbort("variable term does not expect any arguments", label)
+          }
+        case VariableLabelVariable(label, _) => report.errorAndAbort("undeclared free variable", label)
+        case other => report.errorAndAbort("expected term, got formula", other.expr)
+      }
+    }
+    // Ditto
+    predicates.flatMap(f => interpolator.map.get(f.id).map(f -> _)).foreach { case (f, variable) =>
+      variable match {
+        case PredicateLabelVariable(label, placeholder) =>
+          if (f.arity != placeholder.arity) {
+            reportArityMismatch(label, placeholder.arity, f.arity)
+          }
+        case FormulaVariable(label, placeholder) =>
+          if (f.arity != placeholder.arity) {
+            report.errorAndAbort("variable formula does not expect any arguments", label)
+          }
+        case VariableLabelVariable(label, _) => report.errorAndAbort("undeclared free variable", label)
+        case other => report.errorAndAbort("expected formula, got term", other.expr)
+      }
+    }
+    // Connectors are disjoint from anything else
+    connectors.flatMap(f => interpolator.map.get(f.id).map(f -> _)).foreach { case (f, variable) =>
+      variable match {
+        case ConnectorLabelVariable(label, placeholder) =>
+          if (f.arity != placeholder.arity) {
+            reportArityMismatch(label, placeholder.arity, f.arity)
+          }
+        case other => throw new Error // Shouldn't happen
+      }
+    }
+    // Variables are also disjoint from everything else
+    variables.flatMap(f => interpolator.map.get(f.id).map(f -> _)).foreach { case (f, variable) =>
+      variable match {
+        case VariableLabelVariable(_, _) => ()
+        case other => report.errorAndAbort("expected term, got formula", other.expr)
+      }
+    }
+  }
+
+  private def termApplyMacro[P <: Tuple](parts: Expr[P], args: Expr[Seq[Any]])(using Quotes, Type[P]): Expr[Term] = {
+    import quotes.reflect.*
+    import LiftFOL.{_, given}
+
+    val interpolator = toTokens(parts, args)
+    val resolved = FrontResolver.resolveTerm(FrontParser.parseTopTermOrFormula(interpolator.tokens))
+
+    typeCheck(interpolator, termLabelsOf(resolved), Set.empty, Set.empty, freeVariablesOf(resolved))
+
+    '{
+      val (functionsMap, _, _, variablesMap) = ${ getRenaming(interpolator.variables) }
+      unsafeFixPointTermInstantiate(${ Expr(resolved) }, functionsMap, variablesMap)
+    }
+  }
+  private def formulaApplyMacro[P <: Tuple](parts: Expr[P], args: Expr[Seq[Any]])(using Quotes, Type[P]): Expr[Formula] = {
+    import quotes.reflect.*
+    import LiftFOL.{_, given}
+
+    val interpolator = toTokens(parts, args)
+    val resolved = FrontResolver.resolveFormula(FrontParser.parseTopTermOrFormula(interpolator.tokens))
+
+    typeCheck(interpolator, termLabelsOf(resolved), predicatesOf(resolved), schematicConnectorsOf(resolved), freeVariablesOf(resolved))
+
+    '{
+      val (functionsMap, predicatesMap, connectorsMap, variablesMap) = ${ getRenaming(interpolator.variables) }
+      unsafeFixPointFormulaInstantiate(${ Expr(resolved) }, functionsMap, predicatesMap, connectorsMap, variablesMap)
+    }
+  }
+  private def sequentApplyMacro[P <: Tuple](parts: Expr[P], args: Expr[Seq[Any]])(using Quotes, Type[P]): Expr[Sequent] = {
+    import quotes.reflect.*
+    import LiftFOL.{_, given}
+
+    val interpolator = toTokens(parts, args)
+    val resolved = FrontResolver.resolveSequent(FrontParser.parseSequent(interpolator.tokens))
+
+    typeCheck(interpolator, functionsOfSequent(resolved), predicatesOfSequent(resolved), schematicConnectorsOfSequent(resolved), freeVariablesOfSequent(resolved))
+
+    '{
+      val (functionsMap, predicatesMap, connectorsMap, variablesMap) = ${ getRenaming(interpolator.variables) }
+      def rename(formula: Formula): Formula =
+        unsafeFixPointFormulaInstantiate(formula, functionsMap, predicatesMap, connectorsMap, variablesMap)
+      Sequent(${ liftSeq(resolved.left.toSeq.map(Expr.apply)) }.toIndexedSeq.map(rename), ${ liftSeq(resolved.right.toSeq.map(Expr.apply)) }.toIndexedSeq.map(rename))
+    }
+  }
+  private def partialSequentApplyMacro[P <: Tuple](parts: Expr[P], args: Expr[Seq[Any]])(using Quotes, Type[P]): Expr[PartialSequent] = {
+    import quotes.reflect.*
+    import LiftFOL.{_, given}
+
+    val interpolator = toTokens(parts, args)
+    val resolved = FrontResolver.resolvePartialSequent(FrontParser.parsePartialSequent(interpolator.tokens))
+
+    typeCheck(interpolator, functionsOfSequent(resolved), predicatesOfSequent(resolved), schematicConnectorsOfSequent(resolved), freeVariablesOfSequent(resolved))
+
+    '{
+      val (functionsMap, predicatesMap, connectorsMap, variablesMap) = ${ getRenaming(interpolator.variables) }
+      def rename(formula: Formula): Formula =
+        unsafeFixPointFormulaInstantiate(formula, functionsMap, predicatesMap, connectorsMap, variablesMap)
+      PartialSequent(
+        ${ liftSeq(resolved.left.toSeq.map(Expr.apply)) }.toIndexedSeq.map(rename),
+        ${ liftSeq(resolved.right.toSeq.map(Expr.apply)) }.toIndexedSeq.map(rename),
+        ${ Expr(resolved.partialLeft) },
+        ${ Expr(resolved.partialRight) }
+      )
+    }
+  }
+
+  private object LiftFOL {
+    def liftSeq[T](seq: Seq[Expr[T]])(using Quotes, Type[T]): Expr[Seq[T]] =
+      seq.foldRight('{ Seq.empty[T] })((e, acc) => '{ $e +: $acc })
+
+    // TODO support the generic type conversion (it's harder than it looks)
+
+    given ToExpr[SchematicTermLabel[?]] with {
+      def apply(f: SchematicTermLabel[?])(using Quotes): Expr[SchematicTermLabel[?]] =
+        '{ SchematicTermLabel.unsafe(${ Expr(f.id) }, ${ Expr(f.arity.asInstanceOf[Int]) }) }
+    }
+    given ToExpr[ConstantFunctionLabel[?]] with {
+      def apply(f: ConstantFunctionLabel[?])(using Quotes): Expr[ConstantFunctionLabel[?]] =
+        '{ ConstantFunctionLabel.unsafe(${ Expr(f.id) }, ${ Expr(f.arity.asInstanceOf[Int]) }) }
+    }
+    given ToExpr[SchematicPredicateLabel[?]] with {
+      def apply(f: SchematicPredicateLabel[?])(using Quotes) =
+        '{ SchematicPredicateLabel.unsafe(${ Expr(f.id) }, ${ Expr(f.arity.asInstanceOf[Int]) }) }
+    }
+    given ToExpr[ConstantPredicateLabel[?]] with {
+      def apply(f: ConstantPredicateLabel[?])(using Quotes): Expr[ConstantPredicateLabel[?]] =
+        '{ ConstantPredicateLabel.unsafe(${ Expr(f.id) }, ${ Expr(f.arity.asInstanceOf[Int]) }) }
+    }
+    given ToExpr[SchematicConnectorLabel[?]] with {
+      def apply(f: SchematicConnectorLabel[?])(using Quotes) =
+        '{ SchematicConnectorLabel.unsafe(${ Expr(f.id) }, ${ Expr(f.arity.asInstanceOf[Int]) }) }
+    }
+    given ToExpr[VariableLabel] with {
+      def apply(l: VariableLabel)(using Quotes) =
+        '{ VariableLabel(${ Expr(l.id) }) }
+    }
+    given ToExpr[BinderLabel] with {
+      def apply(l: BinderLabel)(using Quotes) =
+        l match {
+          case `forall` => '{ forall }
+          case `exists` => '{ exists }
+          case `existsOne` => '{ existsOne }
+        }
+    }
+
+    // FIXME: a "hack"; otherwise the two givens would clash
+    val toExprFunction0: ToExpr[SchematicTermLabel[0]] = new {
+      def apply(f: SchematicTermLabel[0])(using Quotes): Expr[SchematicTermLabel[0]] =
+        '{ SchematicTermLabel[0](${ Expr(f.id) }) }
+    }
+    val toExprPredicate0: ToExpr[SchematicPredicateLabel[0]] = new {
+      def apply(f: SchematicPredicateLabel[0])(using Quotes): Expr[SchematicPredicateLabel[0]] =
+        '{ SchematicPredicateLabel[0](${ Expr(f.id) }) }
+    }
+
+    given ToExpr[TermLabel[?]] with {
+      def apply(f: TermLabel[?])(using Quotes): Expr[TermLabel[?]] = f match {
+        case constant: ConstantFunctionLabel[?] => Expr(constant)(using summon[ToExpr[ConstantFunctionLabel[?]]])
+        case schematic: SchematicTermLabel[?] => Expr(schematic)(using summon[ToExpr[SchematicTermLabel[?]]])
+      }
+    }
+    given ToExpr[PredicateLabel[?]] with {
+      def apply(f: PredicateLabel[?])(using Quotes): Expr[PredicateLabel[?]] = f match {
+        case constant: ConstantPredicateLabel[?] => Expr(constant)(using summon[ToExpr[ConstantPredicateLabel[?]]])
+        case schematic: SchematicPredicateLabel[?] => Expr(schematic)(using summon[ToExpr[SchematicPredicateLabel[?]]])
+      }
+    }
+    given ToExpr[ConnectorLabel[?]] with {
+      def apply(f: ConnectorLabel[?])(using Quotes): Expr[ConnectorLabel[?]] = f match {
+        case constant: ConstantConnectorLabel[?] =>
+          constant match {
+            case `neg` => '{ neg }
+            case `implies` => '{ implies }
+            case `iff` => '{ iff }
+            case `and` => '{ and }
+            case `or` => '{ or }
+          }
+        case schematic: SchematicConnectorLabel[?] => Expr(schematic)(using summon[ToExpr[SchematicConnectorLabel[?]]])
+      }
+    }
+
+    given ToExpr[Term] with {
+      def apply(t: Term)(using Quotes): Expr[Term] = t match {
+        case VariableTerm(label) => '{ VariableTerm(${ Expr(label) }) }
+        case Term(label, args) => '{ Term.unsafe(${ Expr(label) }, ${ liftSeq(args.map(Expr.apply(_))) }) }
+      }
+    }
+    given ToExpr[Formula] with {
+      def apply(f: Formula)(using Quotes): Expr[Formula] = f match {
+        case PredicateFormula(label, args) => '{ PredicateFormula.unsafe(${ Expr(label) }, ${ liftSeq(args.map(Expr.apply(_))) }) }
+        case ConnectorFormula(label, args) => '{ ConnectorFormula.unsafe(${ Expr(label) }, ${ liftSeq(args.map(Expr.apply(_))) }) }
+        case BinderFormula(label, bound, inner) => '{ BinderFormula(${ Expr(label) }, ${ Expr(bound) }, ${ Expr(inner) }) }
+      }
+    }
+  }
+
+}
diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontParsed.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontParsed.scala
new file mode 100644
index 0000000000000000000000000000000000000000..02b33f037760fc537a8353b8259404381f2678c1
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/parser/FrontParsed.scala
@@ -0,0 +1,68 @@
+package lisa.front.parser
+
+import scala.util.parsing.input.Position
+import scala.util.parsing.input.Positional
+
+sealed abstract class FrontParsed extends Positional
+
+/**
+ * The intermediate representation for first order logic and sequents.
+ */
+private[parser] object FrontParsed {
+
+  case class ParsedSequent(freeVariables: Seq[String], left: Seq[ParsedTermOrFormula], right: Seq[ParsedTermOrFormula]) extends FrontParsed
+  case class ParsedPartialSequent(freeVariables: Seq[String], left: Seq[ParsedTermOrFormula], right: Seq[ParsedTermOrFormula], partialLeft: Boolean, partialRight: Boolean) extends FrontParsed
+
+  case class ParsedTopTermOrFormula(freeVariables: Seq[String], termOrFormula: ParsedTermOrFormula) extends FrontParsed
+
+  sealed abstract class ParsedTermOrFormula extends FrontParsed
+
+  sealed abstract class ParsedName extends ParsedTermOrFormula {
+    val identifier: String
+  }
+  case class ParsedConstant(identifier: String) extends ParsedName
+  case class ParsedSchema(identifier: String, connector: Boolean) extends ParsedName
+
+  case class ParsedApplication(name: ParsedName, args: Seq[ParsedTermOrFormula]) extends ParsedTermOrFormula
+
+  sealed abstract class ParsedBinaryOperator extends ParsedTermOrFormula {
+    val left: ParsedTermOrFormula
+    val right: ParsedTermOrFormula
+  }
+  case class ParsedAnd(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedBinaryOperator
+  case class ParsedOr(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedBinaryOperator
+  case class ParsedImplies(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedBinaryOperator
+  case class ParsedIff(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedBinaryOperator
+
+  case class ParsedEqual(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedBinaryOperator
+  case class ParsedMembership(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedBinaryOperator
+  case class ParsedSubset(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedBinaryOperator
+  case class ParsedSameCardinality(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedBinaryOperator
+
+  case class ParsedPower(termOrFormula: ParsedTermOrFormula) extends ParsedTermOrFormula
+  case class ParsedUnion(termOrFormula: ParsedTermOrFormula) extends ParsedTermOrFormula
+
+  case class ParsedNot(termOrFormula: ParsedTermOrFormula) extends ParsedTermOrFormula
+
+  sealed abstract class ParsedProduct extends ParsedTermOrFormula {
+    val left: ParsedTermOrFormula
+    val right: ParsedTermOrFormula
+  }
+  case class ParsedOrderedPair(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedProduct
+  case class ParsedSet2(left: ParsedTermOrFormula, right: ParsedTermOrFormula) extends ParsedProduct
+  // case class ParsedSet1(termOrFormula: ParsedTermOrFormula) extends ParsedTermOrFormula
+  case class ParsedSet0() extends ParsedTermOrFormula
+
+  sealed abstract class ParsedBinder extends ParsedTermOrFormula {
+    val bound: Seq[String]
+    val termOrFormula: ParsedTermOrFormula
+  }
+  case class ParsedForall(bound: Seq[String], termOrFormula: ParsedTermOrFormula) extends ParsedBinder
+  case class ParsedExists(bound: Seq[String], termOrFormula: ParsedTermOrFormula) extends ParsedBinder
+  case class ParsedExistsOne(bound: Seq[String], termOrFormula: ParsedTermOrFormula) extends ParsedBinder
+
+  case class ParsedProofStep(stepPosition: Position, indentation: Int, line: Int, ruleName: String, premises: Seq[Int], conclusion: ParsedSequent, parameters: Seq[ParsedTopTermOrFormula])
+      extends FrontParsed
+  case class ParsedProof(steps: IndexedSeq[ParsedProofStep]) extends FrontParsed
+
+}
diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontParser.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontParser.scala
new file mode 100644
index 0000000000000000000000000000000000000000..de77138568dca2432c79c07ad12b8f621481d08e
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/parser/FrontParser.scala
@@ -0,0 +1,161 @@
+package lisa.front.parser
+
+import lisa.front.parser.FrontParsed
+import lisa.front.parser.FrontParsed.*
+import lisa.front.parser.FrontReadingException.ParsingException
+import lisa.front.parser.FrontToken
+import lisa.front.parser.FrontToken.*
+
+import scala.util.parsing.combinator.Parsers
+
+/**
+ * The parser converts low-level tokens into the [[FrontParsed]] intermediate representation.
+ */
+private[parser] object FrontParser extends Parsers {
+
+  override type Elem = FrontToken
+
+  private def identifier: Parser[Identifier] =
+    positioned(accept("identifier", { case id: Identifier => id }))
+  private def schematicIdentifier: Parser[SchematicIdentifier] =
+    positioned(accept("schematic identifier", { case id: SchematicIdentifier => id }))
+  private def schematicConnectorIdentifier: Parser[SchematicConnectorIdentifier] =
+    positioned(accept("schematic connector identifier", { case id: SchematicConnectorIdentifier => id }))
+
+  private def integerLiteral: Parser[IntegerLiteral] =
+    positioned(accept("integer literal", { case lit: IntegerLiteral => lit }))
+  private def ruleName: Parser[RuleName] =
+    positioned(accept("rule", { case rule: RuleName => rule }))
+
+  private def indentation: Parser[Indentation] =
+    positioned(accept("indentation", { case indentation: Indentation => indentation }))
+  private def newLine: Parser[NewLine] =
+    positioned(accept("new line", { case line: NewLine => line }))
+
+  private def identifierOrSchematic: Parser[Identifier | SchematicIdentifier | SchematicConnectorIdentifier] =
+    positioned((identifier: Parser[Identifier | SchematicIdentifier | SchematicConnectorIdentifier]) | schematicIdentifier | schematicConnectorIdentifier)
+
+  private def binder: Parser[ParsedTermOrFormula] = positioned(
+    (Forall() ^^^ ParsedForall.apply | Exists() ^^^ ParsedExists.apply | ExistsOne() ^^^ ParsedExistsOne.apply) ~
+      rep1sep(identifier, Comma()) ~ Dot() ~ termOrFormula ^^ { case f ~ bs ~ _ ~ t => f(bs.map(_.identifier), t) }
+  )
+
+  private def termOrFormula: Parser[ParsedTermOrFormula] = positioned(termOrFormulaIff)
+
+  private def termOrFormulaIff: Parser[ParsedTermOrFormula] =
+    positioned(termOrFormulaImplies ~ rep(Iff() ~> termOrFormulaImplies) ^^ { case h ~ t => (h +: t).reduceRight(ParsedIff.apply) })
+  private def termOrFormulaImplies: Parser[ParsedTermOrFormula] =
+    positioned(termOrFormulaOr ~ rep(Implies() ~> termOrFormulaOr) ^^ { case h ~ t => (h +: t).reduceRight(ParsedImplies.apply) })
+  private def termOrFormulaOr: Parser[ParsedTermOrFormula] =
+    positioned(termOrFormulaAnd ~ rep(Or() ~> termOrFormulaAnd) ^^ { case h ~ t => (h +: t).reduceRight(ParsedOr.apply) })
+  private def termOrFormulaAnd: Parser[ParsedTermOrFormula] =
+    positioned(termOrFormulaPredicate ~ rep(And() ~> termOrFormulaPredicate) ^^ { case h ~ t => (h +: t).reduceRight(ParsedAnd.apply) })
+  private def termOrFormulaPredicate: Parser[ParsedTermOrFormula] =
+    positioned(
+      termNotBinder ~
+        rep(
+          (Membership() ^^^ ParsedMembership.apply | Subset() ^^^ ParsedSubset.apply | SameCardinality() ^^^ ParsedSameCardinality.apply | Equal() ^^^ ParsedEqual.apply) ~
+            termNotBinder
+        ) ^^ { case t1 ~ ts =>
+          ts.foldRight(t1) { case (f ~ tr, tl) => f(tl, tr) }
+        }
+    )
+
+  private def termNotBinder: Parser[ParsedTermOrFormula] =
+    positioned(
+      atom
+        | Not() ~> atom ^^ ParsedNot.apply
+        | binder
+    )
+
+  private def atom: Parser[ParsedTermOrFormula] = positioned(
+    (Identifier("P") ^^^ ParsedPower.apply | Identifier("U") ^^^ ParsedUnion.apply) ~ ParenthesisOpen() ~ termOrFormula ~ ParenthesisClose() ^^ { case f ~ _ ~ t ~ _ =>
+      f(t)
+    }
+      | identifierOrSchematic ~ (ParenthesisOpen() ~> rep1sep(termOrFormula, Comma()) <~ ParenthesisClose()).? ^^ { case v ~ argsOpt =>
+        val name = v match {
+          case Identifier(identifier) => ParsedConstant(identifier)
+          case SchematicIdentifier(identifier) => ParsedSchema(identifier, connector = false)
+          case SchematicConnectorIdentifier(identifier) => ParsedSchema(identifier, connector = true)
+        }
+        argsOpt.map(ParsedApplication(name, _)).getOrElse(name)
+      }
+      | ParenthesisOpen() ~ termOrFormula ~ (Comma() ~> termOrFormula <~ ParenthesisClose()).? ~ ParenthesisClose() ^^ { case _ ~ t1 ~ opt ~ _ =>
+        opt match {
+          case Some(t2) => ParsedOrderedPair(t1, t2)
+          case None => t1
+        }
+      }
+      | CurlyBracketOpen() ~> (termOrFormula ~ (Comma() ~> termOrFormula).?).? <~ CurlyBracketClose() ^^ {
+        case Some(t1 ~ opt2) =>
+          opt2 match {
+            case Some(t2) => ParsedSet2(t1, t2)
+            case None => ParsedSet2(t1, t1)
+          }
+        case None => ParsedSet0()
+      }
+      | EmptySet() ^^^ ParsedSet0()
+  )
+
+  private def localBinder: Parser[Seq[String]] =
+    LocalBinder() ~> rep1sep(identifier, Comma()) <~ Dot() ^^ (fv => fv.map(_.identifier))
+  private def localBinderOptional: Parser[Seq[String]] = localBinder.? ^^ (fv => fv.getOrElse(Seq.empty))
+
+  private def topTermOrFormula: Parser[ParsedTopTermOrFormula] =
+    localBinderOptional ~ termOrFormula ^^ { case fv ~ t => ParsedTopTermOrFormula(fv, t) }
+
+  private def termOrFormulaSequence: Parser[Seq[ParsedTermOrFormula]] =
+    repsep(termOrFormula, Semicolon())
+
+  private def sequent: Parser[ParsedSequent] =
+    positioned(localBinderOptional ~ termOrFormulaSequence ~ Turnstile() ~ termOrFormulaSequence ^^ { case fv ~ l ~ _ ~ r => ParsedSequent(fv, l, r) })
+
+  private def partialSequent: Parser[ParsedPartialSequent] =
+    positioned(
+      localBinderOptional ~ (
+        ((Ellipsis() ~> (Semicolon() ~> rep1sep(termOrFormula, Semicolon())).?) ^^ (opt => (opt.getOrElse(Seq.empty), true))) |
+          termOrFormulaSequence ^^ (seq => (seq, false))
+      ) ~ Turnstile() ~
+        ((Ellipsis() ^^^ Seq.empty | (rep1sep(termOrFormula, Semicolon()) <~ (Semicolon() ~ Ellipsis()))) ^^ (seq => (seq, true)) |
+          termOrFormulaSequence ^^ (seq => (seq, false))) ^^ { case fv ~ (l, pl) ~ _ ~ (r, pr) =>
+          ParsedPartialSequent(fv, l, r, pl, pr)
+        }
+    )
+
+  private def proofStepParameters: Parser[Seq[ParsedTopTermOrFormula]] =
+    SquareBracketOpen() ~> repsep(topTermOrFormula, Semicolon()) <~ SquareBracketClose() ^^ (_.toSeq)
+
+  private def proofStep: Parser[ParsedProofStep] = positioned(
+    indentation ~ integerLiteral ~ ruleName ~ repsep(integerLiteral, Comma()) ~ sequent ~ proofStepParameters.? ^^ { case i ~ l ~ r ~ p ~ s ~ ps =>
+      ParsedProofStep(l.pos, i.spaces, l.value, r.name, p.map(_.value), s, ps.getOrElse(Seq.empty))
+    }
+  )
+
+  private def proof: Parser[ParsedProof] = positioned(
+    (indentation ~ newLine).* ~> rep1sep(proofStep, newLine) <~ (newLine ~ indentation).* ^^ (steps => ParsedProof(steps.toIndexedSeq))
+  )
+
+  private def parse[T](parser: Parser[T])(tokens: Seq[FrontToken]): T = {
+    val reader = new FrontTokensReader(tokens)
+    parser(reader) match {
+      case e @ NoSuccess(msg, next) => throw ParsingException(msg, next.pos)
+      case Success(result, next) => result
+      case e => throw new MatchError(e)
+    }
+  }
+
+  def parseTermOrFormula(tokens: Seq[FrontToken]): ParsedTermOrFormula =
+    parse(positioned(termOrFormula <~ End()))(tokens)
+
+  def parseTopTermOrFormula(tokens: Seq[FrontToken]): ParsedTopTermOrFormula =
+    parse(positioned(topTermOrFormula <~ End()))(tokens)
+
+  def parseSequent(tokens: Seq[FrontToken]): ParsedSequent =
+    parse(positioned(sequent <~ End()))(tokens)
+
+  def parsePartialSequent(tokens: Seq[FrontToken]): ParsedPartialSequent =
+    parse(positioned(partialSequent <~ End()))(tokens)
+
+  def parseProof(tokens: Seq[FrontToken]): ParsedProof =
+    parse(positioned(proof <~ End()))(tokens)
+}
diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontReader.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontReader.scala
new file mode 100644
index 0000000000000000000000000000000000000000..36b2b375353856bd7b574327d9d5dc98ebee904d
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/parser/FrontReader.scala
@@ -0,0 +1,40 @@
+package lisa.front.parser
+
+import lisa.front.fol.FOL.*
+import lisa.front.parser.FrontResolver
+import lisa.front.proof.Proof.*
+
+/**
+ * The reading API; parses strings into first order logic or sequent elements.
+ * Reading exceptions can be found in [[FrontReadingException]].
+ */
+object FrontReader {
+
+  private def lexing(str: String, ascii: Boolean, multiline: Boolean): Seq[FrontToken] = {
+    val lexer = if (ascii) FrontLexer.lexingAscii else FrontLexer.lexingUnicode
+    lexer(str, !multiline, false)
+  }
+
+  def readTerm(str: String, ascii: Boolean = true, toplevel: Boolean = true, multiline: Boolean = false): Term = {
+    val tokens = lexing(str, ascii, multiline)
+    if (toplevel)
+      FrontResolver.resolveTerm(FrontParser.parseTopTermOrFormula(tokens))
+    else
+      FrontResolver.resolveTerm(FrontParser.parseTermOrFormula(tokens))
+  }
+
+  def readFormula(str: String, ascii: Boolean = true, toplevel: Boolean = true, multiline: Boolean = false): Formula = {
+    val tokens = lexing(str, ascii, multiline)
+    if (toplevel)
+      FrontResolver.resolveFormula(FrontParser.parseTopTermOrFormula(tokens))
+    else
+      FrontResolver.resolveFormula(FrontParser.parseTermOrFormula(tokens))
+  }
+
+  def readSequent(str: String, ascii: Boolean = true, multiline: Boolean = false): Sequent =
+    FrontResolver.resolveSequent(FrontParser.parseSequent(lexing(str, ascii, multiline)))
+
+  def readPartialSequent(str: String, ascii: Boolean = true, multiline: Boolean = false): PartialSequent =
+    FrontResolver.resolvePartialSequent(FrontParser.parsePartialSequent(lexing(str, ascii, multiline)))
+
+}
diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontReadingException.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontReadingException.scala
new file mode 100644
index 0000000000000000000000000000000000000000..c4bbade3b599abfa4281d4b66c44cae9aad60b18
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/parser/FrontReadingException.scala
@@ -0,0 +1,21 @@
+package lisa.front.parser
+
+import scala.util.parsing.input.Position
+
+/**
+ * An exception that can occur during reading.
+ */
+sealed abstract class FrontReadingException extends Exception {
+  val message: String
+  val position: Position
+
+  override def toString: String = s"[$position] failure: $message\n\n${position.longString}"
+}
+
+object FrontReadingException {
+
+  case class LexingException(message: String, position: Position) extends FrontReadingException
+  case class ParsingException(message: String, position: Position) extends FrontReadingException
+  case class ResolutionException(message: String, position: Position) extends FrontReadingException
+
+}
diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontResolver.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontResolver.scala
new file mode 100644
index 0000000000000000000000000000000000000000..c655652478e1e6db4b97a89287727bd68e128b6e
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/parser/FrontResolver.scala
@@ -0,0 +1,166 @@
+package lisa.front.parser
+
+import lisa.front.fol.FOL.*
+import lisa.front.parser.FrontParsed.*
+import lisa.front.parser.FrontReadingException.ResolutionException
+import lisa.front.proof.Proof.*
+import lisa.front.theory.SetTheory
+
+import scala.collection.View
+import scala.util.Failure
+import scala.util.Success
+import scala.util.Try
+import scala.util.parsing.input.Position
+import scala.util.parsing.input.Positional
+
+/**
+ * Resolves the intermediate representation ([[FrontParsed]]) into actual first order logic or sequent elements.
+ */
+private[parser] object FrontResolver {
+
+  // Free variables must appear in the context, otherwise they will be treated as
+  // nullary function terms
+
+  case class ScopedContext(boundVariables: Set[String], freeVariables: Set[String]) {
+    def variables: Set[String] = boundVariables ++ freeVariables
+  }
+
+  private def emptyScopedContext: ScopedContext = ScopedContext(Set.empty, Set.empty)
+
+  private def resolveFunctionTermLabel(name: ParsedName, arity: Int): TermLabel[?] = name match {
+    case ParsedConstant(identifier) => ConstantFunctionLabel.unsafe(identifier, arity)
+    case ParsedSchema(identifier, connector) =>
+      if (!connector)
+        SchematicTermLabel.unsafe(identifier, arity)
+      else
+        throw ResolutionException("Type error: expected term, got schematic connector formula", name.pos)
+  }
+
+  private def resolvePredicateOrConnectorFormulaLabel(name: ParsedName, arity: Int): PredicateLabel[?] | SchematicConnectorLabel[?] = name match {
+    case ParsedConstant(identifier) => ConstantPredicateLabel.unsafe(identifier, arity)
+    case ParsedSchema(identifier, connector) =>
+      if (connector)
+        SchematicConnectorLabel.unsafe(identifier, arity)
+      else
+        SchematicPredicateLabel.unsafe(identifier, arity)
+  }
+
+  private def resolveTermContext(tree: ParsedTermOrFormula)(implicit ctx: ScopedContext): Term = tree match {
+    case name: ParsedName =>
+      name match {
+        case ParsedConstant(identifier) =>
+          // If the name is in the context then it must be a variable; otherwise we fall back to a constant function
+          if (ctx.variables.contains(identifier)) {
+            VariableTerm(VariableLabel(identifier))
+          } else {
+            ConstantFunctionLabel[0](identifier)
+          }
+        case ParsedSchema(identifier, connector) =>
+          if (!connector) {
+            SchematicTermLabel[0](identifier)
+          } else {
+            throw ResolutionException("Type error: expected term, got schematic connector formula", tree.pos)
+          }
+      }
+    case ParsedApplication(name, args) =>
+      Term.unsafe(resolveFunctionTermLabel(name, args.size), args.map(resolveTermContext(_)))
+    case ParsedOrderedPair(left, right) =>
+      ConstantFunctionLabel[2]("ordered_pair")(resolveTermContext(left), resolveTermContext(right))
+    case ParsedSet2(left, right) =>
+      SetTheory.unorderedPairSet(resolveTermContext(left), resolveTermContext(right))
+    // case ParsedSet1(subtree) =>
+    //   SetTheory.singletonSet(resolveTermContext(subtree))
+    case ParsedSet0() =>
+      SetTheory.emptySet
+    case ParsedPower(subtree) =>
+      SetTheory.powerSet(resolveTermContext(subtree))
+    case ParsedUnion(subtree) =>
+      SetTheory.unionSet(resolveTermContext(subtree))
+    case _ => throw ResolutionException("Type error: expected term, got formula", tree.pos)
+  }
+
+  private def resolveFormulaContext(tree: ParsedTermOrFormula)(implicit ctx: ScopedContext): Formula = tree match {
+    case name: ParsedName =>
+      resolvePredicateOrConnectorFormulaLabel(name, 0) match {
+        case predicate: PredicateLabel[?] => PredicateFormula.unsafe(predicate, Seq.empty)
+        case connector: SchematicConnectorLabel[?] =>
+          throw ResolutionException("Illegal: the arity of schematic connectors must be strictly positive", tree.pos)
+      }
+    case ParsedApplication(name, args) =>
+      resolvePredicateOrConnectorFormulaLabel(name, args.size) match {
+        case predicate: PredicateLabel[?] => PredicateFormula.unsafe(predicate, args.map(resolveTermContext(_)))
+        case connector: SchematicConnectorLabel[?] => ConnectorFormula.unsafe(connector, args.map(resolveFormulaContext(_)))
+      }
+    case operator: ParsedBinaryOperator =>
+      val label: Either[PredicateLabel[?], ConnectorLabel[?]] = operator match {
+        case _: ParsedEqual => Left(equality)
+        case _: ParsedMembership => Left(ConstantPredicateLabel[2]("set_membership"))
+        case _: ParsedSubset => Left(ConstantPredicateLabel[2]("subset_of"))
+        case _: ParsedSameCardinality => Left(ConstantPredicateLabel[2]("same_cardinality"))
+        case _: ParsedAnd => Right(and)
+        case _: ParsedOr => Right(or)
+        case _: ParsedImplies => Right(implies)
+        case _: ParsedIff => Right(iff)
+      }
+      val args = Seq(operator.left, operator.right)
+      label match {
+        case Left(label) => PredicateFormula.unsafe(label, args.map(resolveTermContext(_)))
+        case Right(label) => ConnectorFormula.unsafe(label, args.map(resolveFormulaContext(_)))
+      }
+    case ParsedNot(termOrFormula) =>
+      ConnectorFormula.unsafe(neg, Seq(resolveFormulaContext(termOrFormula)))
+    case binder: ParsedBinder =>
+      binder.bound.find(ctx.variables.contains).orElse(binder.bound.diff(binder.bound.distinct).headOption) match {
+        case Some(bound) => throw ResolutionException(s"Name conflict: ${binder.bound}", binder.pos)
+        case None => ()
+      }
+      val label = binder match {
+        case _: ParsedForall => forall
+        case _: ParsedExists => exists
+        case _: ParsedExistsOne => existsOne
+      }
+      binder.bound.foldRight(resolveFormulaContext(binder.termOrFormula)(ctx.copy(boundVariables = ctx.boundVariables ++ binder.bound)))((bound, body) =>
+        BinderFormula(label, VariableLabel(bound), body)
+      )
+    case _ => throw ResolutionException("Type error: expected formula, got term", tree.pos)
+  }
+
+  def resolveTerm(tree: ParsedTermOrFormula): Term =
+    resolveTermContext(tree)(emptyScopedContext)
+
+  def resolveTerm(tree: ParsedTopTermOrFormula): Term =
+    resolveTermContext(tree.termOrFormula)(freeVariablesToContext(tree.freeVariables, tree.pos))
+
+  def resolveFormula(tree: ParsedTermOrFormula): Formula =
+    resolveFormulaContext(tree)(emptyScopedContext)
+
+  private def freeVariablesToContext(freeVariables: Seq[String], position: Position): ScopedContext = {
+    val repeated = freeVariables.diff(freeVariables.distinct).distinct
+    if (repeated.isEmpty) {
+      ScopedContext(Set.empty, freeVariables.toSet)
+    } else {
+      throw ResolutionException(s"Repeated free variable declaration: ${repeated.mkString(", ")}", position)
+    }
+  }
+
+  def resolveFormula(tree: ParsedTopTermOrFormula): Formula =
+    resolveFormulaContext(tree.termOrFormula)(freeVariablesToContext(tree.freeVariables, tree.pos))
+
+  def resolveSequent(tree: ParsedSequent): Sequent = {
+    val ctx = freeVariablesToContext(tree.freeVariables, tree.pos)
+    Sequent(tree.left.map(resolveFormulaContext(_)(ctx)).toIndexedSeq, tree.right.map(resolveFormulaContext(_)(ctx)).toIndexedSeq)
+  }
+
+  def resolvePartialSequent(tree: ParsedPartialSequent): PartialSequent = {
+    val ctx = freeVariablesToContext(tree.freeVariables, tree.pos)
+    PartialSequent(
+      tree.left.map(resolveFormulaContext(_)(ctx)).toIndexedSeq,
+      tree.right.map(resolveFormulaContext(_)(ctx)).toIndexedSeq,
+      tree.partialLeft,
+      tree.partialRight
+    )
+  }
+
+}
diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontSymbols.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontSymbols.scala
new file mode 100644
index 0000000000000000000000000000000000000000..644a2d2a273a51cd5234256939dc546272bb3fc8
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/parser/FrontSymbols.scala
@@ -0,0 +1,94 @@
+package lisa.front.parser
+
+/**
+ * Symbols to be used by the parser and printer.
+ * There exist two variants, [[FrontSymbols.FrontAsciiSymbols]] and [[FrontSymbols.FrontUnicodeSymbols]].
+ */ +private[front] sealed abstract class FrontSymbols { + val Forall: String + val Exists: String + val ExistsOne: String + val Iff: String + val Implies: String + val Or: String + val And: String + val Exclamation: String + val Turnstile: String + val Ellipsis: String + val Subset: String + val Membership: String + val EmptySet: String + val Top: String = "⊤" + val Bot: String = "⊥" + val Equal: String = "=" + val Tilde: String = "~" + val Backslash: String = "\\" + val CurlyBracketOpen: String = "{" + val CurlyBracketClose: String = "}" + val SquareBracketOpen: String = "[" + val SquareBracketClose: String = "]" + val ParenthesisOpen: String = "(" + val ParenthesisClose: String = ")" + val Dot: String = "." + val Comma: String = "," + val Semicolon: String = ";" + val QuestionMark: String = "?" + val PowerSet: String = "P" + val UnionSet: String = "U" +} + +private[front] object FrontSymbols { + object FrontAsciiSymbols extends FrontSymbols { + override val Forall: String = "forall" + override val Exists: String = "exists" + override val ExistsOne: String = "existsone" + override val Iff: String = "<=>" + override val Implies: String = "=>" + override val Or: String = raw"\/" + override val And: String = """/\""" + override val Exclamation: String = "!" + override val Turnstile: String = "|-" + override val Ellipsis: String = "..." + override val Membership: String = "in" + override val Subset: String = "sub" + override val EmptySet: String = "{}" + } + + object FrontUnicodeSymbols extends FrontSymbols { + override val Forall: String = "∀" + override val Exists: String = "∃" + override val ExistsOne: String = "∃!" 
+ override val Iff: String = "↔" + override val Implies: String = "→" + override val Or: String = "∨" + override val And: String = "∧" + override val Exclamation: String = "¬" + override val Turnstile: String = "⊢" + override val Ellipsis: String = "…" + override val Membership: String = "∈" + override val Subset: String = "⊆" + override val EmptySet: String = "∅" + } + + object FrontLatexSymbols extends FrontSymbols { + override val Forall: String = raw"\forall" + override val Exists: String = raw"\exists" + override val ExistsOne: String = raw"\exists!" + override val Iff: String = raw"\Leftrightarrow" + override val Implies: String = raw"\Rightarrow" + override val Or: String = raw"\lor" + override val And: String = raw"\land" + override val Exclamation: String = raw"\neg" + override val Turnstile: String = raw"\vdash" + override val Ellipsis: String = raw"\ldots" + override val Membership: String = raw"\in" + override val Subset: String = raw"\subseteq" + override val EmptySet: String = raw"\varnothing" + override val Tilde: String = raw"\sim" + override val Backslash: String = raw"\setminus" + override val CurlyBracketOpen: String = raw"\{" + override val CurlyBracketClose: String = raw"\}" + override val PowerSet: String = raw"\mathcal{P}" + override val UnionSet: String = raw"\mathcal{U}" + } +} diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontToken.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontToken.scala new file mode 100644 index 0000000000000000000000000000000000000000..5478031c7d20664032761d90cfc920c675938911 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/parser/FrontToken.scala @@ -0,0 +1,59 @@ +package lisa.front.parser + +import scala.util.parsing.input.Positional + +/** + * Low-level tokens used by the lexer ([[FrontLexer]]). 
+ */ +private[parser] enum FrontToken extends Positional { + + case Identifier(identifier: String) + case SchematicIdentifier(identifier: String) + case SchematicConnectorIdentifier(identifier: String) + + case IntegerLiteral(value: Int) + + case Indentation(spaces: Int) + + case NewLineWithIndentation(spaces: Int) + case InitialIndentation(spaces: Int) + + // The reason these *must* be case classes is because they extend `Positional`, + // which contains a position attribute (that shouldn't be shared between instances) + + case Turnstile() + case And() + case Or() + case Implies() + case Iff() + case Equal() + case Membership() + case Subset() + case SameCardinality() + case Forall() + case Exists() + case ExistsOne() + case Not() + case EmptySet() + case LocalBinder() + + case Ellipsis() + + case CurlyBracketOpen() + case CurlyBracketClose() + case SquareBracketOpen() + case SquareBracketClose() + case ParenthesisOpen() + case ParenthesisClose() + + case Dot() + case Comma() + case Semicolon() + + case RuleName(name: String) + + case NewLine() + + case End() + +} diff --git a/lisa-front/src/main/scala/lisa/front/parser/FrontTokensReader.scala b/lisa-front/src/main/scala/lisa/front/parser/FrontTokensReader.scala new file mode 100644 index 0000000000000000000000000000000000000000..08e143cff0474d09529843b3d4be51e5a523d439 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/parser/FrontTokensReader.scala @@ -0,0 +1,12 @@ +package lisa.front.parser + +import scala.util.parsing.input.NoPosition +import scala.util.parsing.input.Position +import scala.util.parsing.input.Reader + +private[parser] class FrontTokensReader(tokens: Seq[FrontToken]) extends Reader[FrontToken] { + override def first: FrontToken = tokens.head + override def atEnd: Boolean = tokens.isEmpty + override def pos: Position = tokens.headOption.map(_.pos).getOrElse(NoPosition) + override def rest: Reader[FrontToken] = new FrontTokensReader(tokens.tail) +} diff --git 
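The `FrontTokensReader` defined above is a thin immutable cursor over a pre-lexed token sequence, which is all the scala-parser-combinators `Reader[T]` interface requires. As a dependency-free sketch of the same idea (the names here are illustrative, not part of LISA's API):

```scala
// Illustrative, self-contained analogue of FrontTokensReader: an immutable
// cursor over a token sequence, advanced by re-wrapping the tail.
final case class TokensReader[T](tokens: Seq[T]) {
  def first: T = tokens.head            // current token (check atEnd first)
  def atEnd: Boolean = tokens.isEmpty   // true once every token is consumed
  def rest: TokensReader[T] = TokensReader(tokens.tail) // cursor one step ahead
}
```

Because each `rest` call returns a fresh cursor, a backtracking parser can hold on to earlier positions without copying the underlying token sequence.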
a/lisa-front/src/main/scala/lisa/front/parser/KernelReader.scala b/lisa-front/src/main/scala/lisa/front/parser/KernelReader.scala new file mode 100644 index 0000000000000000000000000000000000000000..95bfc801260f14f07f23d5b844c675b5ea86caeb --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/parser/KernelReader.scala @@ -0,0 +1,10 @@ +package lisa.front.parser + +import lisa.kernel.proof.SCProof + +object KernelReader { + + def readProof(str: String): SCProof = + KernelResolver.resolveProof(FrontParser.parseProof(FrontLexer.lexingExtendedUnicode(str)), FrontSymbols.FrontUnicodeSymbols) + +} diff --git a/lisa-front/src/main/scala/lisa/front/parser/KernelResolver.scala b/lisa-front/src/main/scala/lisa/front/parser/KernelResolver.scala new file mode 100644 index 0000000000000000000000000000000000000000..3179d0a0d67ca9b546b7ccb66641d0c1d4300192 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/parser/KernelResolver.scala @@ -0,0 +1,299 @@ +package lisa.front.parser + +import lisa.front.parser.FrontParsed.* +import lisa.front.parser.FrontReadingException.ResolutionException +import lisa.front.parser.FrontResolver.* +import lisa.kernel.fol.FOL.* +import lisa.kernel.proof.SCProof +import lisa.kernel.proof.SequentCalculus.* + +import scala.util.Failure +import scala.util.Success +import scala.util.Try +import scala.util.parsing.input.Position + +private[parser] object KernelResolver { + + private def tryResolve[T](t: => T): Option[T] = Try(t) match { + case Success(value) => Some(value) + case Failure(exception) => + exception match { + case _: ResolutionException => None + case _ => throw new Exception(exception) + } + } + private def all[T](t: => Seq[Option[T]]): Option[Seq[T]] = if (t.forall(_.nonEmpty)) Some(t.flatten) else None + + private object Formula { + def unapply(tree: ParsedTopTermOrFormula): Option[Formula] = tryResolve(resolveFormula(tree)) + } + private object FormulaSeq { + def unapplySeq(trees: Seq[ParsedTopTermOrFormula]): Option[Seq[Formula]] 
= all(trees.map(tree => tryResolve(resolveFormula(tree)))) + } + private object Term { + def unapply(tree: ParsedTopTermOrFormula): Option[Term] = tryResolve(resolveTerm(tree)) + } + private object TermSeq { + def unapplySeq(trees: Seq[ParsedTopTermOrFormula]): Option[Seq[Term]] = all(trees.map(tree => tryResolve(resolveTerm(tree)))) + } + private object Identifier { + def unapply(tree: ParsedTopTermOrFormula): Option[String] = tree.termOrFormula match { + case ParsedConstant(identifier) => Some(identifier) + case _ => None + } + } + + private object FunctionMap { + def unapplySeq(trees: Seq[ParsedTopTermOrFormula]): Option[Seq[(SchematicTermLabel, LambdaTermTerm)]] = { + all(trees.map { + case ParsedTopTermOrFormula(freeVariables, ParsedEqual(left, term)) => + val opt = left match { + case ParsedSchema(identifier, false) => Some((identifier, Seq.empty)) + case ParsedApplication(ParsedSchema(identifier, false), args) => Some((identifier, args)) + case _ => None + } + opt.flatMap { case (identifier, args) => + all(args.map { + case ParsedSchema(arg, false) => Some(VariableLabel(arg)) + case _ => None + }).flatMap(arguments => + Term + .unapply(ParsedTopTermOrFormula(freeVariables, term)) + .map(body => SchematicFunctionLabel(identifier, args.size) -> LambdaTermTerm(arguments, body)) + ) + } + case _ => None + }) + } + } + private object PredicateMap { + def unapplySeq(trees: Seq[ParsedTopTermOrFormula]): Option[Seq[(SchematicVarOrPredLabel, LambdaTermFormula)]] = { + all(trees.map { + case ParsedTopTermOrFormula(freeVariables, ParsedIff(left, term)) => + val opt = left match { + case ParsedSchema(identifier, true) => Some((identifier, Seq.empty)) + case ParsedApplication(ParsedSchema(identifier, true), args) => Some((identifier, args)) + case _ => None + } + opt.flatMap { case (identifier, args) => + all(args.map { + case ParsedSchema(arg, false) => Some(VariableLabel(arg)) + case _ => None + }).flatMap(arguments => + Formula + 
.unapply(ParsedTopTermOrFormula(freeVariables, term)) + .map(body => SchematicPredicateLabel(identifier, args.size) -> LambdaTermFormula(arguments, body)) + ) + } + case _ => None + }) + } + } + + private def resolveProofStep(tree: ParsedProofStep, kernelRuleIdentifiers: KernelRuleIdentifiers): SCProofStep = { + val R = kernelRuleIdentifiers + require(!Seq(R.Import, R.SubproofShown, R.SubproofHidden).contains(tree.ruleName)) + + val placeholder = "_" + def throwInvalidStep(): Nothing = + throw ResolutionException(s"Incorrect premises and/or arguments for step '${tree.ruleName}'", tree.stepPosition) + + val bot = resolveSequent(tree.conclusion) + (tree.ruleName, tree.premises, tree.parameters) match { + case (R.Hypothesis, Seq(), Seq(Formula(phi))) => Hypothesis(bot, phi) + case (R.Cut, Seq(t1, t2), Seq(Formula(phi))) => Cut(bot, t1, t2, phi) + case (R.Rewrite, Seq(t1), Seq()) => Rewrite(bot, t1) + case (R.RewriteTrue, Seq(), Seq()) => RewriteTrue(bot) + case (R.Weakening, Seq(t1), Seq()) => Weakening(bot, t1) + case (R.LeftAnd, Seq(t1), Seq(Formula(phi), Formula(psi))) => LeftAnd(bot, t1, phi, psi) + case (R.RightAnd, premises, FormulaSeq(conjuncts*)) if conjuncts.size == premises.size => RightAnd(bot, premises, conjuncts) + case (R.LeftOr, premises, FormulaSeq(conjuncts*)) if conjuncts.size == premises.size => LeftOr(bot, premises, conjuncts) + case (R.RightOr, Seq(t1), Seq(Formula(phi), Formula(psi))) => RightOr(bot, t1, phi, psi) + case (R.LeftImplies, Seq(t1, t2), Seq(Formula(phi), Formula(psi))) => LeftImplies(bot, t1, t2, phi, psi) + case (R.RightImplies, Seq(t1), Seq(Formula(phi), Formula(psi))) => RightImplies(bot, t1, phi, psi) + case (R.LeftIff, Seq(t1), Seq(Formula(phi), Formula(psi))) => LeftIff(bot, t1, phi, psi) + case (R.RightIff, Seq(t1, t2), Seq(Formula(phi), Formula(psi))) => RightIff(bot, t1, t2, phi, psi) + case (R.LeftNot, Seq(t1), Seq(Formula(phi))) => LeftNot(bot, t1, phi) + case (R.RightNot, Seq(t1), Seq(Formula(phi))) => RightNot(bot, t1, 
phi) + case (R.LeftForall, Seq(t1), Seq(Formula(phi), Identifier(x), Term(t))) => LeftForall(bot, t1, phi, VariableLabel(x), t) + case (R.RightForall, Seq(t1), Seq(Formula(phi), Identifier(x))) => RightForall(bot, t1, phi, VariableLabel(x)) + case (R.LeftExists, Seq(t1), Seq(Formula(phi), Identifier(x))) => LeftExists(bot, t1, phi, VariableLabel(x)) + case (R.RightExists, Seq(t1), Seq(Formula(phi), Identifier(x), Term(t))) => RightExists(bot, t1, phi, VariableLabel(x), t) + case (R.LeftExistsOne, Seq(t1), Seq(Formula(phi), Identifier(x))) => LeftExistsOne(bot, t1, phi, VariableLabel(x)) + case (R.RightExistsOne, Seq(t1), Seq(Formula(phi), Identifier(x))) => RightExistsOne(bot, t1, phi, VariableLabel(x)) + case (R.LeftRefl, Seq(t1), Seq(Formula(fa))) => LeftRefl(bot, t1, fa) + case (R.RightRefl, Seq(), Seq(Formula(fa))) => RightRefl(bot, fa) + case (R.LeftSubstEq, Seq(t1), parameters) if parameters.size % 2 == 1 => + (parameters.init, parameters.last) match { + case (TermSeq(terms*), Formula(ConnectorFormula(Iff, Seq(PredicateFormula(ConstantPredicateLabel(`placeholder`, _), args), body)))) => + all(args.map { + case VariableTerm(label) => Some(label) + case _ => None + }) match { + case Some(vars) => LeftSubstEq(bot, t1, terms.grouped(2).collect { case Seq(a, b) => (a, b) }.toList, LambdaTermFormula(vars, body)) + case None => throwInvalidStep() + } + case _ => throwInvalidStep() + } + case (R.RightSubstEq, Seq(t1), parameters) if parameters.size % 2 == 1 => + (parameters.init, parameters.last) match { + case (TermSeq(terms*), Formula(ConnectorFormula(Iff, Seq(PredicateFormula(ConstantPredicateLabel(`placeholder`, _), args), body)))) => + all(args.map { + case VariableTerm(label) => Some(label) + case _ => None + }) match { + case Some(vars) => RightSubstEq(bot, t1, terms.grouped(2).collect { case Seq(a, b) => (a, b) }.toList, LambdaTermFormula(vars, body)) + case None => throwInvalidStep() + } + case _ => throwInvalidStep() + } + case (R.LeftSubstIff, Seq(t1), 
parameters) if parameters.size % 2 == 1 => + (parameters.init, parameters.last) match { + case (FormulaSeq(formulas*), Formula(ConnectorFormula(Iff, Seq(PredicateFormula(ConstantPredicateLabel(`placeholder`, _), args), body)))) => + all(args.map { + case VariableTerm(label) => Some(VariableFormulaLabel(label.id)) + case _ => None + }) match { + case Some(vars) => LeftSubstIff(bot, t1, formulas.grouped(2).collect { case Seq(a, b) => (a, b) }.toList, LambdaFormulaFormula(vars, body)) + case None => throwInvalidStep() + } + case _ => throwInvalidStep() + } + case (R.RightSubstIff, Seq(t1), parameters) if parameters.size % 2 == 1 => + (parameters.init, parameters.last) match { + case (FormulaSeq(formulas*), Formula(ConnectorFormula(Iff, Seq(PredicateFormula(ConstantPredicateLabel(`placeholder`, _), args), body)))) => + all(args.map { + case VariableTerm(label) => Some(VariableFormulaLabel(label.id)) + case _ => None + }) match { + case Some(vars) => RightSubstIff(bot, t1, formulas.grouped(2).collect { case Seq(a, b) => (a, b) }.toList, LambdaFormulaFormula(vars, body)) + case None => throwInvalidStep() + } + case _ => throwInvalidStep() + } + case (R.FunInstantiation, Seq(t1), FunctionMap(map*)) => InstFunSchema(bot, t1, map.toMap) + case (R.PredInstantiation, Seq(t1), PredicateMap(map*)) => InstPredSchema(bot, t1, map.toMap) + case _ => throwInvalidStep() + } + } + + def resolveProof(tree: ParsedProof, symbols: FrontSymbols): SCProof = { + val R = KernelRuleIdentifiers(symbols) + + case class StackEntry(steps: IndexedSeq[SCProofStep], imports: IndexedSeq[Sequent], subproofPremises: Seq[Int], nextLineNumber: Int, indentation: Int) + + def foldSubproofs(stack: Seq[StackEntry], position: Position): Option[SCSubproof] = { + stack.foldLeft(None: Option[SCSubproof]) { (subproofOpt, entry) => + val premises = entry.subproofPremises + val newSteps = subproofOpt match { + case Some(proof) => entry.steps :+ proof.copy(premises = premises) + case None => entry.steps + } + if 
(newSteps.isEmpty) { + throw ResolutionException("This proof or subproof is incomplete", position) + } + Some(SCSubproof(SCProof(newSteps, entry.imports))) + } + } + + val (finalStack, finalExpectedDeeperIndentation) = tree.steps.foldLeft( + (Seq.empty[StackEntry], true) + ) { case ((stack, expectedDeeperIndentation), parsedStep) => + // The true indentation of the current step + val rightIndentation = parsedStep.indentation + parsedStep.line.toString.length + + val newStack = + if (expectedDeeperIndentation) { // Either the first line of a subproof or the first line of the proof + stack match { + case entry +: _ => // First step inside a subproof + if (rightIndentation <= entry.indentation) { + throw ResolutionException("The content of this subproof must be indented further", parsedStep.stepPosition) + } + val importsSize = entry.subproofPremises.size + if (-parsedStep.line != importsSize) { + throw ResolutionException(s"The parent subproof declared $importsSize premise(s), therefore this line must start with index ${-importsSize}", parsedStep.stepPosition) + } + case _ => // Very first line of the proof + () + } + if (parsedStep.line > 0) { + throw ResolutionException(s"The index of the first proof step cannot be strictly positive", parsedStep.stepPosition) + } + val entry = StackEntry(IndexedSeq.empty, IndexedSeq.empty, Seq.empty, parsedStep.line, rightIndentation) + entry +: stack + } else { // A line at the same level or lower + assert(stack.nonEmpty) // Invariant + + val indentationIndex = stack.zipWithIndex.find { case (entry, _) => entry.indentation == rightIndentation }.map(_._2) + indentationIndex match { + case Some(delta) => + val previousEntry: StackEntry = stack(delta) + previousEntry.copy(steps = previousEntry.steps ++ foldSubproofs(stack.take(delta), parsedStep.stepPosition).map(sp => sp.copy(premises = previousEntry.subproofPremises)).toSeq) +: stack + .drop(delta + 1) + case None => + throw ResolutionException("This step is not properly indented", 
parsedStep.stepPosition) + } + } + + assert(newStack.nonEmpty) // Invariant + + val entry = newStack.head + val tail = newStack.tail + + if (parsedStep.line != entry.nextLineNumber) { + throw ResolutionException(s"Expected line to be numbered ${entry.nextLineNumber}, but got ${parsedStep.line} instead", parsedStep.stepPosition) + } + + val isImport = parsedStep.ruleName == R.Import + val isSubproof = parsedStep.ruleName == R.SubproofShown // Hidden is excluded from this + + if (parsedStep.ruleName == R.SubproofHidden) { + throw ResolutionException("Cannot parse a hidden subproof", parsedStep.stepPosition) + } + + if (parsedStep.line < 0) { + if (!isImport) { + throw ResolutionException("Steps with negative indices must be import statements", parsedStep.stepPosition) + } + } else { + if (isImport) { + throw ResolutionException("Import statements can only appear on steps with negative indices", parsedStep.stepPosition) + } + } + + if (isImport && parsedStep.premises.nonEmpty) { + throw ResolutionException("Import statements cannot have premises", parsedStep.stepPosition) + } + + if (!isImport) { + parsedStep.premises.foreach { premise => + if ((premise < 0 && -premise > entry.imports.size) || (premise >= 0 && premise >= entry.steps.size)) { + throw ResolutionException(s"Premise $premise is out of bounds", parsedStep.stepPosition) + } + } + } + + val sequent = resolveSequent(parsedStep.conclusion) + + val newEntry = entry.copy(nextLineNumber = entry.nextLineNumber + 1, subproofPremises = Seq.empty) + + if (isImport) { + (newEntry.copy(imports = sequent +: newEntry.imports) +: tail, false) + } else if (isSubproof) { + (newEntry.copy(subproofPremises = parsedStep.premises) +: tail, true) + } else { + val newStep = resolveProofStep(parsedStep, R) + (newEntry.copy(steps = newEntry.steps :+ newStep) +: tail, false) // `isSubproof` is necessarily false on this branch: the subproof case is handled above + } + } + + val lastPosition = tree.steps.last.stepPosition + if 
(finalExpectedDeeperIndentation) { + throw ResolutionException("Empty trailing subproof", lastPosition) + } + + val finalStep = foldSubproofs(finalStack, lastPosition) + finalStep.get.sp + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/parser/KernelRuleIdentifiers.scala b/lisa-front/src/main/scala/lisa/front/parser/KernelRuleIdentifiers.scala new file mode 100644 index 0000000000000000000000000000000000000000..4a66a9741c80fc588b966b85d1b4a8bab43405af --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/parser/KernelRuleIdentifiers.scala @@ -0,0 +1,92 @@ +package lisa.front.parser + +import lisa.kernel.proof.SequentCalculus.SCProofStep +import lisa.kernel.proof.SequentCalculus as SC + +private[front] case class KernelRuleIdentifiers(symbols: FrontSymbols) { + + private val isLatex: Boolean = symbols.isInstanceOf[FrontSymbols.FrontLatexSymbols.type] + + private val Left: String = "Left" + private val Right: String = "Right" + private val Subst: String = "Subst." + private val Refl: String = "Refl." 
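`KernelRuleIdentifiers` derives every printed rule name from the active `FrontSymbols` instance, so a single definition yields the ASCII, Unicode and LaTeX spellings. A minimal sketch of that pattern (simplified, with hypothetical names that are not LISA's API):

```scala
// Simplified sketch of the KernelRuleIdentifiers pattern: rule names are
// assembled from a pluggable symbol table instead of being hard-coded.
trait Symbols { def And: String; def Or: String }
object AsciiSymbols extends Symbols { val And = "/\\"; val Or = "\\/" }
object UnicodeSymbols extends Symbols { val And = "∧"; val Or = "∨" }

final case class RuleIdentifiers(s: Symbols) {
  private def left(sym: String): String = s"Left $sym"
  private def right(sym: String): String = s"Right $sym"
  val LeftAnd: String = left(s.And)   // e.g. "Left ∧" or "Left /\"
  val RightOr: String = right(s.Or)
}
```

Swapping the symbol table swaps every rule name at once, which is how the real class can also produce LaTeX output (wrapping names in `\text{...}`) without duplicating the rule list.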
+ private val Instantiation: String = "Instantiation" + private val Subproof: String = "Subproof" + + private def symbol(s: String): String = if (isLatex) s"{$s}" else s + private def text(s: String): String = if (isLatex) raw"\text{$s}" else s + private def space: String = if (isLatex) "~" else " " + private def left(s: String): String = s"${text(Left)}$space${symbol(s)}" + private def right(s: String): String = s"${text(Right)}$space${symbol(s)}" + private def leftSubst(s: String): String = s"${text(s"$Left $Subst")}$space${symbol(s)}" + private def rightSubst(s: String): String = s"${text(s"$Right $Subst")}$space${symbol(s)}" + + val Hypothesis: String = text("Hypo.") + val Cut: String = text("Cut") + val Rewrite: String = text("Rewrite") + val RewriteTrue: String = text("Rewrite " + symbols.Top) + val Weakening: String = text("Weakening") + val LeftAnd: String = left(symbols.And) + val RightAnd: String = right(symbols.And) + val LeftOr: String = left(symbols.Or) + val RightOr: String = right(symbols.Or) + val LeftImplies: String = left(symbols.Implies) + val RightImplies: String = right(symbols.Implies) + val LeftIff: String = left(symbols.Iff) + val RightIff: String = right(symbols.Iff) + val LeftNot: String = left(symbols.Exclamation) + val RightNot: String = right(symbols.Exclamation) + val LeftForall: String = left(symbols.Forall) + val RightForall: String = right(symbols.Forall) + val LeftExists: String = left(symbols.Exists) + val RightExists: String = right(symbols.Exists) + val LeftExistsOne: String = left(symbols.ExistsOne) + val RightExistsOne: String = right(symbols.ExistsOne) + val LeftRefl: String = left(Refl) + val RightRefl: String = right(Refl) + val LeftSubstEq: String = leftSubst(symbols.Equal) + val RightSubstEq: String = rightSubst(symbols.Equal) + val LeftSubstIff: String = leftSubst(symbols.Iff) + val RightSubstIff: String = rightSubst(symbols.Iff) + val FunInstantiation: String = text(s"?Fun. 
$Instantiation") + val PredInstantiation: String = text(s"?Pred. $Instantiation") + val SubproofShown: String = text(Subproof) + val SubproofHidden: String = text(s"$Subproof (hidden)") + val Import: String = text("Import") + + def identify(step: SCProofStep): String = step match { + case _: SC.Hypothesis => Hypothesis + case _: SC.Cut => Cut + case _: SC.Rewrite => Rewrite + case _: SC.RewriteTrue => RewriteTrue + case _: SC.Weakening => Weakening + case _: SC.LeftAnd => LeftAnd + case _: SC.RightAnd => RightAnd + case _: SC.LeftOr => LeftOr + case _: SC.RightOr => RightOr + case _: SC.LeftImplies => LeftImplies + case _: SC.RightImplies => RightImplies + case _: SC.LeftIff => LeftIff + case _: SC.RightIff => RightIff + case _: SC.LeftNot => LeftNot + case _: SC.RightNot => RightNot + case _: SC.LeftForall => LeftForall + case _: SC.RightForall => RightForall + case _: SC.LeftExists => LeftExists + case _: SC.RightExists => RightExists + case _: SC.LeftExistsOne => LeftExistsOne + case _: SC.RightExistsOne => RightExistsOne + case _: SC.LeftRefl => LeftRefl + case _: SC.RightRefl => RightRefl + case _: SC.LeftSubstEq => LeftSubstEq + case _: SC.RightSubstEq => RightSubstEq + case _: SC.LeftSubstIff => LeftSubstIff + case _: SC.RightSubstIff => RightSubstIff + case _: SC.InstFunSchema => FunInstantiation + case _: SC.InstPredSchema => PredInstantiation + case SC.SCSubproof(_, _, true) => SubproofShown + case SC.SCSubproof(_, _, false) => SubproofHidden + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/printer/FrontPositionedPrinter.scala b/lisa-front/src/main/scala/lisa/front/printer/FrontPositionedPrinter.scala new file mode 100644 index 0000000000000000000000000000000000000000..6f013ad6c657b8f2ed729b6275db46a35d89f63c --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/printer/FrontPositionedPrinter.scala @@ -0,0 +1,331 @@ +package lisa.front.printer + +import lisa.front.fol.FOL.* +import lisa.front.parser.FrontSymbols +import 
lisa.front.printer.FrontPrintNode.* +import lisa.front.printer.FrontPrintParameters +import lisa.front.proof.Proof.* +import lisa.front.theory.SetTheory + +/** + * A set of methods to positioned-print kernel trees. + */ +object FrontPositionedPrinter { + + private val rName = "[a-zA-Z_][a-zA-Z0-9_]*".r + private def isNamePrintable(name: String): Boolean = rName.matches(name) + + private def isTermPrintableInternal(term: Term, variables: Set[String]): Boolean = term match { + case VariableTerm(label) => + assert(variables.contains(label.id)) // By assumption, thus `isNamePrintable` is true + true + case Term(label, args) => + val isLabelPrintable = label match { + case SchematicTermLabel(id, _) => !variables.contains(id) + case _ => true + } + isNamePrintable(label.id) && isLabelPrintable && args.forall(isTermPrintableInternal(_, variables)) + } + + private def isTermPrintable(term: Term, freeVariables: Set[VariableLabel]): Boolean = + freeVariables.map(_.id).forall(isNamePrintable) && isWellFormed(term) && isTermPrintableInternal(term, freeVariables.map(_.id)) + + private def isFormulaPrintableInternal(formula: Formula, variables: Set[String]): Boolean = formula match { + case PredicateFormula(label, args) => + (!label.isInstanceOf[SchematicLabelType] || isNamePrintable(label.id)) && args.forall(isTermPrintableInternal(_, variables)) + case ConnectorFormula(label, args) => + (!label.isInstanceOf[SchematicLabelType] || isNamePrintable(label.id)) && args.forall(isFormulaPrintableInternal(_, variables)) + case BinderFormula(label, bound, inner) => + isNamePrintable(bound.id) && !variables.contains(bound.id) && isFormulaPrintableInternal(inner, variables + bound.id) + } + + private def isFormulaPrintable(formula: Formula, freeVariables: Set[VariableLabel]): Boolean = + freeVariables.map(_.id).forall(isNamePrintable) && isWellFormed(formula) && isFormulaPrintableInternal(formula, freeVariables.map(_.id)) + + private def mkSep(items: FrontPrintNode*)(separator: 
FrontLeaf): IndexedSeq[FrontPrintNode] = { + val nodes = items match { + case head +: tail => + head +: tail.flatMap(Seq(separator, _)) + case other => other + } + nodes.toIndexedSeq + } + + private def spaceSeparator(using p: FrontPrintParameters): String = if (p.compact) "" else " " + private def commaSeparator(separator: String)(using FrontPrintParameters): String = s"$separator$spaceSeparator" + private def commaSeparator(using p: FrontPrintParameters): String = commaSeparator(p.s.Comma) + + private def prettyName(name: String)(using p: FrontPrintParameters): String = + if (p.symbols == FrontPrintStyle.Latex && p.compact) s"{$name}" else name + private def prettyLabel(label: LabelType, double: Boolean = false)(using p: FrontPrintParameters): String = { + val (result, mustWrap) = label match { + case _: SchematicLabelType => + val s = s"${if (double) p.s.QuestionMark else ""}${p.s.QuestionMark}${label.id}" + (s, true) + case _ => (label.id, false) + } + if ((mustWrap && p.symbols == FrontPrintStyle.Latex) || (p.symbols == FrontPrintStyle.Latex && p.compact)) s"{$result}" else result + } + + private def positionedParentheses(content: FrontPrintNode)(using p: FrontPrintParameters): IndexedSeq[FrontPrintNode] = + IndexedSeq(p.s.ParenthesisOpen, content, p.s.ParenthesisClose) + + private def positionedFunction(name: String, args: Seq[FrontBranch], dropParentheses: Boolean = true)(using p: FrontPrintParameters): FrontBranch = { + if (dropParentheses && args.isEmpty) + FrontBranch(name) + else + FrontBranch(FrontLeaf(s"$name${p.s.ParenthesisOpen}") +: mkSep(args*)(commaSeparator) :+ FrontLeaf(p.s.ParenthesisClose)) + } + + private def positionedInfix(operator: String, left: FrontPrintNode, right: FrontPrintNode)(using FrontPrintParameters): FrontBranch = + FrontBranch(mkSep(left, operator, right)(spaceSeparator)) + private def positionedInfix(operator: FrontPrintNode, left: IndexedSeq[FrontPrintNode], right: IndexedSeq[FrontPrintNode])(using FrontPrintParameters): 
FrontBranch = + FrontBranch(left ++ Seq(FrontLeaf(spaceSeparator)) ++ IndexedSeq(operator) ++ Seq(FrontLeaf(spaceSeparator)) ++ right) + + // Special symbols that aren't defined in this theory + private val (membership, subsetOf, sameCardinality) = ( + SetTheory.membership, + SetTheory.subset, + SetTheory.sameCardinality + ) + private val (emptySet, unorderedPair, orderedPair, powerSet, unionSet) = ( + SetTheory.emptySet, + SetTheory.unorderedPairSet, + ConstantFunctionLabel[2]("ordered_pair"), + // SetTheory.singletonSet, + SetTheory.powerSet, + SetTheory.unionSet + ) + private val nonAtomicPredicates = Set[PredicateLabel[?]](equality, membership, subsetOf, sameCardinality) // Predicates which require parentheses (for readability) + + private def positionedFormulaInternal(formula: Formula, isRightMost: Boolean)(using p: FrontPrintParameters): FrontBranch = formula match { + case PredicateFormula(label, args) => + label match { + case `equality` => + args match { + case Seq(l, r) => positionedInfix(p.s.Equal, positionedTermInternal(l), positionedTermInternal(r)) + case _ => throw new Error + } + case `membership` => + args match { + case Seq(l, r) => positionedInfix(p.s.Membership, positionedTermInternal(l), positionedTermInternal(r)) + case _ => throw new Error + } + case `subsetOf` => + args match { + case Seq(l, r) => positionedInfix(p.s.Subset, positionedTermInternal(l), positionedTermInternal(r)) + case _ => throw new Error + } + case `sameCardinality` => + args match { + case Seq(l, r) => positionedInfix(p.s.Tilde, positionedTermInternal(l), positionedTermInternal(r)) + case _ => throw new Error + } + case _ => + positionedFunction(prettyLabel(label), args.map(positionedTermInternal(_))) + } + case ConnectorFormula(label, args) => + (label, args) match { + case (`neg`, Seq(arg)) => + val isAtomic = arg match { + case PredicateFormula(label, _) => !nonAtomicPredicates.contains(label) + case ConnectorFormula(`neg`, _) => true + case _ => false + } + val 
bodyString = positionedFormulaInternal(arg, isRightMost) + val bodyParenthesized = if (isAtomic) IndexedSeq(bodyString) else positionedParentheses(bodyString) + FrontBranch(FrontLeaf(p.s.Exclamation) +: bodyParenthesized) + case (binary @ (`implies` | `iff` | `and` | `or`), Seq(l, r)) => + val precedences: Map[ConnectorLabel[?], Int] = Map( + and -> 1, + or -> 2, + implies -> 3, + iff -> 4 + ) + val symbols: Map[ConnectorLabel[?], String] = Map( + and -> p.s.And, + or -> p.s.Or, + implies -> p.s.Implies, + iff -> p.s.Iff + ) + val precedence = precedences(binary) + val isLeftParentheses = l match { + case _: BinderFormula => true + case PredicateFormula(leftLabel, _) => nonAtomicPredicates.contains(leftLabel) + case ConnectorFormula(leftLabel, _) => precedences.get(leftLabel).exists(_ >= precedence) + } + val isRightParentheses = r match { + case _: BinderFormula => !isRightMost + case PredicateFormula(leftLabel, _) => nonAtomicPredicates.contains(leftLabel) + case ConnectorFormula(rightLabel, _) => precedences.get(rightLabel).exists(_ > precedence) + } + val (leftString, rightString) = (positionedFormulaInternal(l, isLeftParentheses), positionedFormulaInternal(r, isRightMost || isRightParentheses)) + val leftParenthesized = if (isLeftParentheses) positionedParentheses(leftString) else IndexedSeq(leftString) + val rightParenthesized = if (isRightParentheses) positionedParentheses(rightString) else IndexedSeq(rightString) + positionedInfix(symbols(label), leftParenthesized, rightParenthesized) + case (nary @ (`and` | `or`), args) if args.nonEmpty => + // FIXME wrong indexing if we do that + // Rewriting to match the above case; namely op(a) --> a, and op(a, ...rest) --> op(a, op(...rest)) + // Empty args aren't allowed here + // Invariant: args.size > 2 + if (args.sizeIs == 1) { + positionedFormulaInternal(args.head, isRightMost) + } else { + positionedFormulaInternal(ConnectorFormula.unsafe(nary, Seq(args.head, ConnectorFormula.unsafe(nary, args.tail))), 
isRightMost) + } + case _ => positionedFunction(prettyLabel(label, double = true), args.map(a => positionedFormulaInternal(a, isRightMost))) + } + case BinderFormula(label, bound, inner) => + val symbols: Map[BinderLabel, String] = Map( + forall -> p.s.Forall, + exists -> p.s.Exists, + existsOne -> p.s.ExistsOne + ) + def accumulateNested(f: Formula, acc: Seq[VariableLabel]): (Seq[VariableLabel], Formula) = f match { + case BinderFormula(`label`, nestBound, nestInner) => accumulateNested(nestInner, nestBound +: acc) + case _ => (acc, f) + } + val (bounds, innerNested) = accumulateNested(inner, Seq(bound)) + + val innerTree = FrontBranch( + mkSep( + FrontLeaf(s"${symbols(label)}${if (p.symbols == FrontPrintStyle.Ascii || p.symbols == FrontPrintStyle.Latex) " " else ""}${bounds.reverse.map(_.id).mkString(commaSeparator)}${p.s.Dot}"), + positionedFormulaInternal(innerNested, true) + )(spaceSeparator) + ) + bounds.tail.foldLeft(innerTree)((acc, _) => FrontBranch(acc)) + } + + private def positionedExpression(freeVariables: Set[VariableLabel], expression: FrontBranch)(using p: FrontPrintParameters): FrontBranch = { + FrontBranch(expression.children) + } + + /** + * Returns a string representation of this formula. See also [[positionedTerm]]. + * Example output: + * <pre> + * ∀x, y. (∀z. 
(z ∈ x) ↔ (z ∈ y)) ↔ (x = y) + * </pre> + * @param formula the formula + * @param ascii whether it should be printed in ASCII or unicode + * @param compact whether spaces should be omitted between tokens + * @return the string representation of this formula + */ + def positionedFormula(formula: Formula, symbols: FrontPrintStyle = FrontPrintStyle.Unicode, compact: Boolean = false, strict: Boolean = false): FrontBranch = { + given FrontPrintParameters = FrontPrintParameters(symbols, compact) + val f = positionedFormulaInternal(formula, true) + val freeVariables = freeVariablesOf(formula) + if (strict) { + require(isFormulaPrintable(formula, freeVariables)) + } + positionedExpression(freeVariables, f) + } + + def prettyFormula(formula: Formula, symbols: FrontPrintStyle = FrontPrintStyle.Unicode, compact: Boolean = false): String = + positionedFormula(formula, symbols, compact).print + + private def positionedTermInternal(term: Term)(using p: FrontPrintParameters): FrontBranch = term match { + // case VariableTerm(label) => FrontBranch(prettyName(label.id)) + case Term(label, args) => + label match { + case `emptySet` => + args match { + case Seq() => positionedFunction(p.s.EmptySet, Seq.empty, dropParentheses = true) + case _ => throw new Error + } + case `unorderedPair` => + args match { + case Seq(l, r) => FrontBranch(p.s.CurlyBracketOpen, positionedTermInternal(l), commaSeparator, positionedTermInternal(r), p.s.CurlyBracketClose) + case _ => throw new Error + } + case `orderedPair` => + args match { + case Seq(l, r) => FrontBranch(p.s.ParenthesisOpen, positionedTermInternal(l), commaSeparator, positionedTermInternal(r), p.s.ParenthesisClose) + case _ => throw new Error + } + + case `powerSet` => + args match { + case Seq(s) => positionedFunction(p.s.PowerSet, Seq(positionedTermInternal(s))) + case _ => throw new Error + } + case `unionSet` => + args match { + case Seq(s) => positionedFunction(p.s.UnionSet, Seq(positionedTermInternal(s))) + case _ => throw new Error 
+        }
+      case _ =>
+        positionedFunction(prettyLabel(label), args.map(positionedTermInternal(_)))
+    }
+  }
+
+  /**
+   * Returns a string representation of this term. See also [[positionedFormula]].
+   * Example output:
+   * <pre>
+   * f({w, (x, y)}, z)
+   * </pre>
+   * @param term the term
+   * @param symbols the symbol style to print with (ASCII, Unicode or LaTeX)
+   * @param compact whether spaces should be omitted between tokens
+   * @return the string representation of this term
+   */
+  def positionedTerm(term: Term, symbols: FrontPrintStyle = FrontPrintStyle.Unicode, compact: Boolean = false, strict: Boolean = false): FrontBranch = {
+    if (strict) {
+      require(isTermPrintable(term, Set.empty)) // Trivially true
+    }
+    positionedTermInternal(term)(using FrontPrintParameters(symbols, compact))
+  }
+
+  def prettyTerm(term: Term, symbols: FrontPrintStyle = FrontPrintStyle.Unicode, compact: Boolean = false, strict: Boolean = false): String =
+    positionedTerm(term, symbols, compact, strict).print
+
+  private def positionedSequentBase(sequent: SequentBase, symbols: FrontPrintStyle = FrontPrintStyle.Unicode, compact: Boolean = false, strict: Boolean = false): FrontBranch = {
+    given p: FrontPrintParameters = FrontPrintParameters(symbols, compact)
+    val (partialLeft, partialRight) = sequent match {
+      case _: Sequent => (false, false)
+      case PartialSequent(_, _, partialLeft, partialRight) => (partialLeft, partialRight)
+    }
+    def positionedEllipsis(display: Boolean): Seq[FrontPrintNode] = if (display) Seq(p.s.Ellipsis) else Seq.empty
+    def sortedFormulas(seq: IndexedSeq[Formula]): IndexedSeq[FrontPrintNode] =
+      seq.map(positionedFormulaInternal(_, true)).sortBy(_.print)
+    val (lhs, rhs) = (
+      mkSep((positionedEllipsis(partialLeft) ++ sortedFormulas(sequent.left))*)(commaSeparator(p.s.Semicolon)),
+      mkSep((sortedFormulas(sequent.right) ++ positionedEllipsis(partialRight))*)(commaSeparator(p.s.Semicolon))
+    )
+    def spaceFor(seq: IndexedSeq[FrontPrintNode]): Seq[FrontPrintNode] = if (seq.nonEmpty) Seq(spaceSeparator) else Seq.empty
+    val expression = FrontBranch(
+      (
+        lhs ++ spaceFor(lhs) ++ Seq(FrontLeaf(p.s.Turnstile)) ++ spaceFor(rhs) ++ rhs
+      )*
+    )
+    val freeVariables = freeVariablesOfSequent(sequent)
+    if (strict) {
+      require(sequent.formulas.forall(isFormulaPrintable(_, freeVariables)))
+    }
+    positionedExpression(freeVariables, expression)
+  }
+
+  /**
+   * Returns a string representation of this sequent.
+   * Example output:
+   * <pre>
+   * ⊢ ∀x, y. (∀z. (z ∈ x) ↔ (z ∈ y)) ↔ (x = y)
+   * </pre>
+   * @param sequent the sequent
+   * @param symbols the symbol style to print with (ASCII, Unicode or LaTeX)
+   * @param compact whether spaces should be omitted between tokens
+   * @return the string representation of this sequent
+   */
+  def positionedSequent(sequent: Sequent, symbols: FrontPrintStyle = FrontPrintStyle.Unicode, compact: Boolean = false, strict: Boolean = false): FrontBranch =
+    positionedSequentBase(sequent, symbols, compact, strict)
+
+  def prettySequent(sequent: Sequent, symbols: FrontPrintStyle = FrontPrintStyle.Unicode, compact: Boolean = false, strict: Boolean = false): String =
+    positionedSequent(sequent, symbols, compact, strict).print
+
+  def positionedPartialSequent(sequent: PartialSequent, symbols: FrontPrintStyle = FrontPrintStyle.Unicode, compact: Boolean = false, strict: Boolean = false): FrontBranch =
+    positionedSequentBase(sequent, symbols, compact, strict)
+
+  def prettyPartialSequent(sequent: PartialSequent, symbols: FrontPrintStyle = FrontPrintStyle.Unicode, compact: Boolean = false, strict: Boolean = false): String =
+    positionedPartialSequent(sequent, symbols, compact, strict).print
+}
diff --git a/lisa-front/src/main/scala/lisa/front/printer/FrontPrintNode.scala b/lisa-front/src/main/scala/lisa/front/printer/FrontPrintNode.scala
new file mode 100644
index 0000000000000000000000000000000000000000..5399cc201e97422c25672350503ca8ed94091140
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/printer/FrontPrintNode.scala
@@ -0,0 +1,43 @@
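The `FrontPrintNode` file introduced below stores printed output as a tree rather than a flat string, so that the character offset of any printed sub-expression can be recovered afterwards. Here is a minimal standalone sketch of that idea (an illustrative re-implementation with simplified names, not the actual LISA API):

```scala
// Minimal sketch of the print-tree idea: the printed string is kept as a
// tree, so the character offset and length of any subtree can be recovered.
sealed trait Node {
  def print: String = this match {
    case Leaf(v) => v
    case Branch(children) => children.map(_.print).mkString
  }
}
case class Leaf(value: String) extends Node
case class Branch(children: List[Node]) extends Node {
  // (start offset, length) of the sub-branch reached by `pos`, where the
  // path indexes only Branch children (leaves are skipped, as in `locate`).
  def locate(pos: List[Int]): (Int, Int) = {
    def go(b: Branch, pos: List[Int], start: Int): (Int, Int) = pos match {
      case h :: t =>
        val (child, index) = b.children.zipWithIndex.collect { case (br: Branch, i) => (br, i) }.apply(h)
        go(child, t, start + b.children.take(index).map(_.print.length).sum)
      case Nil => (start, b.children.map(_.print.length).sum)
    }
    go(this, pos, 0)
  }
}

val tree = Branch(List(Leaf("("), Branch(List(Leaf("x"))), Leaf(" in "), Branch(List(Leaf("y"))), Leaf(")")))
println(tree.print)           // (x in y)
println(tree.locate(List(1))) // (6,1): "y" starts at offset 6 and spans 1 character
```

This is what makes the front's printer "positioned": a UI can map a subtree of the formula back to a span of the rendered text without re-parsing.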
+package lisa.front.printer
+
+/**
+ * Represents the result of printing expressed as a tree.
+ * This allows extracting positional information.
+ */
+sealed abstract class FrontPrintNode {
+  import FrontPrintNode.*
+
+  def print: String = this match {
+    case FrontLeaf(value) => value
+    case FrontBranch(children) => children.map(_.print).mkString
+  }
+}
+
+object FrontPrintNode {
+  case class FrontLeaf(value: String) extends FrontPrintNode
+  case class FrontBranch(children: IndexedSeq[FrontPrintNode]) extends FrontPrintNode {
+    def locate(pos: Seq[Int]): (Int, Int) = {
+      def locate(that: FrontBranch, pos: Seq[Int], start: Int): (Int, Int) = pos match {
+        case h +: t =>
+          val (child, index) = that.children.view.zipWithIndex
+            .collect { case (branch: FrontBranch, i) =>
+              (branch, i)
+            }
+            .drop(h)
+            .head
+          val newStart = start + that.children.take(index).map(_.print.length).sum
+          locate(child, t, newStart)
+        case _ =>
+          val length = that.children.map(_.print.length).sum
+          (start, length)
+      }
+      locate(this, pos, 0)
+    }
+  }
+
+  object FrontBranch {
+    def apply(children: FrontPrintNode*): FrontBranch = new FrontBranch(children.toIndexedSeq)
+  }
+
+  given Conversion[String, FrontLeaf] = FrontLeaf.apply
+}
diff --git a/lisa-front/src/main/scala/lisa/front/printer/FrontPrintParameters.scala b/lisa-front/src/main/scala/lisa/front/printer/FrontPrintParameters.scala
new file mode 100644
index 0000000000000000000000000000000000000000..b7823131ed9e228f9efbb7defa56dffbaeeabb67
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/printer/FrontPrintParameters.scala
@@ -0,0 +1,18 @@
+package lisa.front.printer
+
+import lisa.front.parser.FrontSymbols
+
+private[printer] case class FrontPrintParameters(s: FrontSymbols, symbols: FrontPrintStyle, compact: Boolean) {
+  // export S.*
+}
+
+private[printer] object FrontPrintParameters {
+  def apply(symbols: FrontPrintStyle, compact: Boolean): FrontPrintParameters = {
+    val s = symbols match {
+      case FrontPrintStyle.Ascii => FrontSymbols.FrontAsciiSymbols
+      case FrontPrintStyle.Unicode => FrontSymbols.FrontUnicodeSymbols
+      case FrontPrintStyle.Latex => FrontSymbols.FrontLatexSymbols
+    }
+    FrontPrintParameters(s, symbols, compact)
+  }
+}
diff --git a/lisa-front/src/main/scala/lisa/front/printer/FrontPrintStyle.scala b/lisa-front/src/main/scala/lisa/front/printer/FrontPrintStyle.scala
new file mode 100644
index 0000000000000000000000000000000000000000..59f595adeaaf45af826418308e02f9c60eec8d42
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/printer/FrontPrintStyle.scala
@@ -0,0 +1,7 @@
+package lisa.front.printer
+
+enum FrontPrintStyle {
+  case Ascii
+  case Unicode
+  case Latex
+}
diff --git a/lisa-front/src/main/scala/lisa/front/proof/Proof.scala b/lisa-front/src/main/scala/lisa/front/proof/Proof.scala
new file mode 100644
index 0000000000000000000000000000000000000000..f1b18c2299a30de5bf355e6eeaab827ea06dd83f
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/proof/Proof.scala
@@ -0,0 +1,28 @@
+package lisa.front.proof
+
+import lisa.front.printer.FrontPositionedPrinter
+import lisa.front.proof.state.*
+
+/**
+ * The proof package.
+ */
+object Proof extends ProofInterfaceDefinitions with RuleDefinitions {
+
+  override protected def pretty(sequent: Sequent): String = FrontPositionedPrinter.prettySequent(sequent)
+  override protected def pretty(sequent: PartialSequent): String = FrontPositionedPrinter.prettyPartialSequent(sequent)
+
+  val fallback: TacticFallback.type = TacticFallback
+  val combine: TacticCombine.type = TacticCombine
+
+  val justification: TacticApplyJustification.type = TacticApplyJustification
+
+  extension (tactic: Tactic) {
+    infix def + : TacticRepeat = TacticRepeat(tactic)
+    infix def |(other: Tactic): TacticFallback = TacticFallback(Seq(tactic, other))
+    infix def ~(other: Tactic): TacticCombine = tactic match {
+      case TacticCombine(tactics) => TacticCombine(tactics :+ other)
+      case _ => TacticCombine(Seq(tactic, other))
+    }
+  }
+
+}
diff --git a/lisa-front/src/main/scala/lisa/front/proof/sequent/SequentDefinitions.scala b/lisa-front/src/main/scala/lisa/front/proof/sequent/SequentDefinitions.scala
new file mode 100644
index 0000000000000000000000000000000000000000..d18bc80a83e0a356e2319677d21227db9dca8688
--- /dev/null
+++ b/lisa-front/src/main/scala/lisa/front/proof/sequent/SequentDefinitions.scala
@@ -0,0 +1,84 @@
+package lisa.front.proof.sequent
+
+import lisa.front.fol.FOL.*
+
+trait SequentDefinitions {
+
+  protected def pretty(sequent: Sequent): String
+  protected def pretty(sequent: PartialSequent): String
+
+  /**
+   * The base sequent object; used to represent both the concrete sequent and its partial counterpart.
+   */
+  sealed abstract class SequentBase {
+    val left: IndexedSeq[Formula]
+    val right: IndexedSeq[Formula]
+
+    def formulas: IndexedSeq[Formula] = left ++ right
+  }
+
+  /**
+   * A sequent is a pair of indexable collections of formulas.
+ * @param left the left hand side of this sequent + * @param right the right hand side of this sequent + */ + final case class Sequent(left: IndexedSeq[Formula], right: IndexedSeq[Formula]) extends SequentBase { + override def toString: String = pretty(this) + } + + /** + * A partial sequent is a representation of a sequent where only a part of the formulas are known. + * @param left the left hand side of this partial sequent + * @param right the right hand side of this partial sequent + * @param partialLeft whether the left hand side is partial + * @param partialRight whether the right hand side is partial + */ + final case class PartialSequent(left: IndexedSeq[Formula], right: IndexedSeq[Formula], partialLeft: Boolean = true, partialRight: Boolean = true) extends SequentBase { + override def toString: String = pretty(this) + } + + def functionsOfSequent(sequent: SequentBase): Set[TermLabel[?]] = sequent.formulas.flatMap(termLabelsOf).toSet + + def predicatesOfSequent(sequent: SequentBase): Set[PredicateLabel[?]] = sequent.formulas.flatMap(predicatesOf).toSet + + def schematicFunctionsOfSequent(sequent: SequentBase): Set[SchematicTermLabel[?]] = + functionsOfSequent(sequent).collect { case l: SchematicTermLabel[?] => l } + + def schematicPredicatesOfSequent(sequent: SequentBase): Set[SchematicPredicateLabel[?]] = + predicatesOfSequent(sequent).collect { case l: SchematicPredicateLabel[?] 
=> l } + + def schematicConnectorsOfSequent(sequent: SequentBase): Set[SchematicConnectorLabel[?]] = + sequent.formulas.flatMap(schematicConnectorsOf).toSet + + def freeVariablesOfSequent(sequent: SequentBase): Set[VariableLabel] = sequent.formulas.flatMap(freeVariablesOf).toSet + + def declaredBoundVariablesOfSequent(sequent: SequentBase): Set[VariableLabel] = + sequent.formulas.flatMap(declaredBoundVariablesOf).toSet + + def isSequentWellFormed(sequent: SequentBase): Boolean = + sequent.formulas.forall(isWellFormed) + + // Only full sequents should be converted to the kernel + def sequentToKernel(sequent: Sequent): lisa.kernel.proof.SequentCalculus.Sequent = + lisa.kernel.proof.SequentCalculus.Sequent( + sequent.left.map(toKernel).toSet, + sequent.right.map(toKernel).toSet + ) + + given Conversion[Sequent, lisa.kernel.proof.SequentCalculus.Sequent] = sequentToKernel + + def isSameSequent(s1: Sequent, s2: Sequent): Boolean = + lisa.kernel.proof.SequentCalculus.isSameSequent(s1, s2) + + def instantiateSequentSchemas( + sequent: Sequent, + functions: Seq[AssignedFunction] = Seq.empty, + predicates: Seq[AssignedPredicate] = Seq.empty, + connectors: Seq[AssignedConnector] = Seq.empty + ): Sequent = { + def instantiate(formulas: IndexedSeq[Formula]): IndexedSeq[Formula] = + formulas.map(instantiateFormulaSchemas(_, functions, predicates, connectors)) + Sequent(instantiate(sequent.left), instantiate(sequent.right)) + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/proof/sequent/SequentOps.scala b/lisa-front/src/main/scala/lisa/front/proof/sequent/SequentOps.scala new file mode 100644 index 0000000000000000000000000000000000000000..c628700250849fa829dc0946ac9875529c174986 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/proof/sequent/SequentOps.scala @@ -0,0 +1,64 @@ +package lisa.front.proof.sequent + +import lisa.front.fol.FOL.* +import lisa.front.proof.sequent.SequentDefinitions + +trait SequentOps extends SequentDefinitions { + + protected trait 
IndexedSeqConverter[S, T] { + def apply(t: T): IndexedSeq[S] + } + + given [S]: IndexedSeqConverter[S, Unit] with { + override def apply(u: Unit): IndexedSeq[S] = IndexedSeq.empty + } + given [S]: IndexedSeqConverter[S, EmptyTuple] with { + override def apply(t: EmptyTuple): IndexedSeq[S] = IndexedSeq.empty + } + given [S, H <: S, T <: Tuple](using converter: IndexedSeqConverter[S, T]): IndexedSeqConverter[S, H *: T] with { + override def apply(t: H *: T): IndexedSeq[S] = t.head +: converter(t.tail) + } + given givenTupleValueConversion[S, H, T <: Tuple](using tupleConverter: IndexedSeqConverter[S, T], valueConverter: Conversion[H, S]): IndexedSeqConverter[S, H *: T] with { + override def apply(t: H *: T): IndexedSeq[S] = valueConverter(t.head) +: tupleConverter(t.tail) + } + given [S, T <: S]: IndexedSeqConverter[S, T] with { + override def apply(f: T): IndexedSeq[S] = IndexedSeq(f) + } + given givenValueConversion[S, T](using converter: Conversion[T, S]): IndexedSeqConverter[S, T] with { + override def apply(f: T): IndexedSeq[S] = IndexedSeq(f: S) + } + given [S, I <: Iterable[S]]: IndexedSeqConverter[S, I] with { + override def apply(s: I): IndexedSeq[S] = s.toIndexedSeq + } + + protected def any2seq[S, A, T <: A](any: T)(using converter: IndexedSeqConverter[S, T]): IndexedSeq[S] = converter(any) + + extension [T1](left: T1)(using IndexedSeqConverter[Formula, T1]) { + infix def |-[T2](right: T2)(using IndexedSeqConverter[Formula, T2]): Sequent = Sequent(any2seq(left), any2seq(right)) + } + + object |- { + def apply[T](right: T)(using IndexedSeqConverter[Formula, T]): Sequent = Sequent(IndexedSeq.empty, any2seq(right)) + infix def unapply(sequent: Sequent): Some[(IndexedSeq[Formula], IndexedSeq[Formula])] = + Some((sequent.left, sequent.right)) + } + + extension [T1](left: T1)(using IndexedSeqConverter[Formula, T1]) { + infix def ||-[T2](right: T2)(using IndexedSeqConverter[Formula, T2]): PartialSequent = PartialSequent(any2seq(left), any2seq(right)) + } + + 
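The `IndexedSeqConverter` given instances above are what let sequents be written in infix style, e.g. `a |- b`. A simplified standalone sketch of how the `|-` notation builds a sequent (converter machinery omitted, illustrative single-formula types, not the actual LISA API):

```scala
// Simplified sketch of the `|-` sequent-building notation: an extension
// method for the infix form, plus an object for the empty-left-hand-side form.
case class Formula(repr: String)
case class Sequent(left: IndexedSeq[Formula], right: IndexedSeq[Formula])

extension (left: Formula) {
  infix def |-(right: Formula): Sequent = Sequent(IndexedSeq(left), IndexedSeq(right))
}

// `|-` used alone builds a sequent with an empty left hand side.
object |- {
  def apply(right: Formula): Sequent = Sequent(IndexedSeq.empty, IndexedSeq(right))
}

val s1 = Formula("a") |- Formula("b") // a ⊢ b
val s2 = |-.apply(Formula("a"))       // ⊢ a
println(s1) // both sides populated
println(s2) // left side empty
```

In the real trait, the converters additionally accept tuples, iterables, and `Unit` on either side, which is why the actual signatures are generic in `T1`/`T2` rather than fixed to `Formula`.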
object ||- { + def apply[T](right: T)(using IndexedSeqConverter[Formula, T]): PartialSequent = PartialSequent(IndexedSeq.empty, any2seq(right)) + infix def unapply(sequent: PartialSequent): Some[(IndexedSeq[Formula], IndexedSeq[Formula])] = + Some((sequent.left, sequent.right)) + } + + type KernelSequent = lisa.kernel.proof.SequentCalculus.Sequent + extension (s: KernelSequent) { + infix def +<(f: Formula): KernelSequent = s.copy(left = s.left + f) + infix def -<(f: Formula): KernelSequent = s.copy(left = s.left - f) + infix def +>(f: Formula): KernelSequent = s.copy(right = s.right + f) + infix def ->(f: Formula): KernelSequent = s.copy(right = s.right - f) + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/proof/state/ProofEnvironmentDefinitions.scala b/lisa-front/src/main/scala/lisa/front/proof/state/ProofEnvironmentDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..62c774bf01bb97ed240926d8f23f962583a73351 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/proof/state/ProofEnvironmentDefinitions.scala @@ -0,0 +1,224 @@ +package lisa.front.proof.state + +import lisa.front.fol.FOL.* +import lisa.kernel.proof.RunningTheory +import lisa.kernel.proof.RunningTheoryJudgement.* +import lisa.kernel.proof.SCProof +import lisa.kernel.proof.SCProofChecker +import lisa.kernel.proof.SequentCalculus.SCSubproof +import lisa.kernel.proof.SequentCalculus.sequentToFormula +import lisa.utils.Printer +import lisa.utils.ProofsShrink + +trait ProofEnvironmentDefinitions extends ProofStateDefinitions { + + import scala.collection.mutable + + /** + * The proof environment represents a mutable context with axioms and theorems. + * It is analogous to the kernel's [[RunningTheory]], but adapted to the front and with additional safety guarantees and utilities. 
+ * @param runningTheory the kernel's theory + */ + final class ProofEnvironment( + val runningTheory: RunningTheory // For now, doesn't need to be generically typed + ) extends ReadableProofEnvironment { + private[ProofEnvironmentDefinitions] val proven: mutable.Map[Sequent, Seq[(Justified, runningTheory.Justification)]] = mutable.Map.empty + + private def addOne(sequent: Sequent, justified: Justified, kernelJustification: runningTheory.Justification): Unit = { + if (!proven.contains(sequent)) { + proven.addOne(sequent, Seq.empty) + } + proven.addOne(sequent, proven(sequent) :+ (justified, kernelJustification)) + } + + // Lift the initial axioms + runningTheory.axiomsList.foreach { kernelAxiom => + val frontAxiom = Axiom(this, fromKernel(kernelAxiom.ax)) + addOne(frontAxiom.sequent, frontAxiom, kernelAxiom) + } + + override def contains(sequent: Sequent): Boolean = proven.contains(sequent) + + def belongsToTheory(label: ConstantFunctionLabel[?]): Boolean = runningTheory.isSymbol(toKernel(label)) + def belongsToTheory(label: ConstantPredicateLabel[?]): Boolean = runningTheory.isSymbol(toKernel(label)) + def belongsToTheory(term: Term): Boolean = + termLabelsOf(term).collect { case f: ConstantFunctionLabel[?] => f }.forall(belongsToTheory) + def belongsToTheory(formula: Formula): Boolean = + termLabelsOf(formula).collect { case f: ConstantFunctionLabel[?] => f }.forall(belongsToTheory) && + predicatesOf(formula).collect { case p: ConstantPredicateLabel[?] 
=> p }.forall(belongsToTheory) + def belongsToTheory(sequent: SequentBase): Boolean = + sequent.left.forall(belongsToTheory) && sequent.right.forall(belongsToTheory) + + private def addSequentToEnvironment(sequent: Sequent, scProof: SCProof, justifiedImports: Map[Int, Sequent]): Theorem = { + require(scProof.imports.size == justifiedImports.size && scProof.imports.indices.forall(justifiedImports.contains), "All imports must be justified") + require(isAcceptedSequent(sequent)(this), "Invalid conclusion") + require( + lisa.kernel.proof.SequentCalculus.isSameSequent(sequentToKernel(sequent), scProof.conclusion), + "Error: the proof conclusion does not match the provided sequent" + ) + val judgement = SCProofChecker.checkSCProof(scProof) + if (!judgement.isValid) { + throw new AssertionError( + Seq( + "Error: the theorem was found to produce an invalid proof; this could indicate a problem with a tactic or a bug in the implementation", + "The produced proof is shown below for reference:", + Printer.prettySCProof(judgement) + ).mkString("\n") + ) + } + + val justificationPairs = scProof.imports.indices.map(justifiedImports).map(proven).map(_.head) + val justifications = justificationPairs.map { case (justification, _) => justification } + + val kernelJustifications = justificationPairs.map { case (_, kernelJustification) => kernelJustification } + val kernelTheorem = runningTheory.makeTheorem(s"t${proven.size}", scProof.conclusion, scProof, kernelJustifications) match { + case ValidJustification(result) => result + case InvalidJustification(_, _) => throw new Error // Should have been caught before + } + + val theorem = Theorem(this, sequent, scProof, justifications) + addOne(sequent, theorem, kernelTheorem) // TODO should we salvage existing theorems instead of creating new ones? 
+ + theorem + } + def mkTheorem(proof: Proof): Theorem = { + require(proof.initialState.goals.sizeIs == 1, "The proof must start with exactly one goal") + val sequent = proof.initialState.goals.head + evaluateProof(proof)(this) match { + case Some(proofModeState) => + val (scProof, theoremImports) = reconstructSCProof(proofModeState) + addSequentToEnvironment(sequent, scProof, theoremImports) + case None => throw new Exception // Failure in evaluating the proof + } + } + def mkAxiom(formula: Formula): Axiom = { + require(runningTheory.isAxiom(formula)) + Axiom(this, formula) + } + // def mkDefinition() // TODO + def mkTheorem(sequent: Sequent, scProof: SCProof, theorems: IndexedSeq[Justified]): Theorem = + addSequentToEnvironment(sequent, scProof, theorems.map(_.sequent).zipWithIndex.map(_.swap).toMap) + // override def toString: String = proven.keySet.toSeq.map(Theorem(this, _)).map(_.toString).mkString("\n") + } + + def newEmptyEnvironment(): ProofEnvironment = new ProofEnvironment(new RunningTheory) + + /** + * A justified statement with respect to a theory is a sequent that is accepted by this theory. + */ + sealed abstract class Justified extends ReadableJustified { + private[proof] val environment: ProofEnvironment + def sequent: Sequent + final def sequentAsKernel: lisa.kernel.proof.SequentCalculus.Sequent = sequentToKernel(sequent) + } + + /** + * An axiom is a justified statement that is admitted without a proof. + * It is guaranteed that this sequent has exactly one conclusion and no assumptions. + * @param formula the original formula + */ + case class Axiom private[ProofEnvironmentDefinitions] (environment: ProofEnvironment, formula: Formula) extends Justified { + override def sequent: Sequent = () |- formula + override def toString: String = s"Axiom: $sequent" + } + + /** + * A theorem is a justified statement which has an associated proof depending on other justified statements. 
+ * @param proof the proof of this theorem + * @param justifications the dependencies of this theorem (= assumptions) + */ + case class Theorem private[ProofEnvironmentDefinitions] (environment: ProofEnvironment, sequent: Sequent, proof: SCProof, justifications: IndexedSeq[Justified]) extends Justified { + override def toString: String = s"Theorem: $sequent" + } + + // Borrowed from past work: https://github.com/FlorianCassayre/competitive-scala + private def topologicalSort[U](start: U, adjacency: Map[U, Set[U]]): Seq[U] = { + def dfs(stack: Seq[(U, Set[U])], marks: Map[U, Boolean], sorted: Seq[U]): (Map[U, Boolean], Seq[U]) = { + stack match { + case (u, adjacent) +: tail => + adjacent.headOption match { + case Some(v) => + marks.get(v) match { + case Some(false) => throw new Exception // Cycle + case Some(true) => dfs((u, adjacent.tail) +: tail, marks, sorted) + case None => dfs((v, adjacency.getOrElse(v, Set.empty[U])) +: (u, adjacent.tail) +: tail, marks + (v -> false), sorted) + } + case None => dfs(tail, marks + (u -> true), u +: sorted) + } + case _ => (marks, sorted) + } + } + val (_, sorted) = dfs(Seq((start, adjacency.getOrElse(start, Set.empty[U]))), Map(start -> false), Seq.empty) + sorted + } + + /** + * Converts a theorem into a kernel proof where the imports are the assumption of that theorem. + * @param theorem the theorem to convert + * @return a kernel proof + */ + def reconstructPartialSCProofForTheorem(theorem: Theorem): SCProof = theorem.proof // (that's it) + + /** + * Converts a theorem into a kernel proof where the imports are all axioms of that theory. + * Essentially inlines all dependent theorems recursively into a single, fat proof. 
+ * @param theorem the theorem to convert + * @return a kernel proof + */ + def reconstructSCProofForTheorem(theorem: Theorem): SCProof = { + // Inefficient, no need to traverse/reconstruct the whole graph + val environment = theorem.environment + val theorems = environment.proven.values + .flatMap(_.collect { case (theorem: Theorem, _) => + theorem + }) + .toSeq + val sortedTheorems = topologicalSort( + theorem, + theorems + .map(theorem => (theorem, theorem.justifications.collect { case other: Theorem => other }.toSet) // This will have to be updated for definitions + ) + .toMap + .withDefaultValue(Set.empty) + ).reverse + val sortedAxioms = sortedTheorems + .flatMap(_.justifications.collect { case ax: Axiom => ax }) + .toSet + .map(_.sequent) + .toIndexedSeq + .sortBy(_.toString) + val sequentToImport = sortedAxioms.zipWithIndex.toMap.view.mapValues(i => -(i + 1)).toMap + + assert(sortedTheorems.lastOption.contains(theorem)) + val sequentToIndex = sortedTheorems + .map(_.sequent) + .zipWithIndex + .reverse // This step is important: in case of duplicate nodes, this ensures we have no forward references + .toMap ++ sequentToImport + + assert(sortedTheorems.zipWithIndex.forall { case (thm, i) => thm.justifications.map(_.sequent).forall(s => sequentToIndex.get(s).exists(_ < i)) }) + + val scProof = SCProof( + sortedTheorems.map(theorem => SCSubproof(theorem.proof, theorem.justifications.map(_.sequent).map(sequentToIndex))).toIndexedSeq, + sortedAxioms.map(sequentToKernel) + ) + + assert(scProof.conclusion == sequentToKernel(theorem.sequent)) + + val judgement = SCProofChecker.checkSCProof(scProof) + if (!judgement.isValid) { + throw new AssertionError( + Seq( + "Error: the reconstructed proof was found to be invalid; this could indicate a bug in the implementation of this very method", + "The reconstructed proof is shown below for reference:", + Printer.prettySCProof(judgement) + ).mkString("\n") + ) + } + + val optimized = 
ProofsShrink.optimizeProofIteratively(scProof) + assert(SCProofChecker.checkSCProof(optimized).isValid) // Assertion failure means a bug in `SCUtils` + optimized + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/proof/state/ProofInterfaceDefinitions.scala b/lisa-front/src/main/scala/lisa/front/proof/state/ProofInterfaceDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..f936a39504aa0db00ed26335abff5371931cfa66 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/proof/state/ProofInterfaceDefinitions.scala @@ -0,0 +1,76 @@ +package lisa.front.proof.state + +import lisa.front.printer.FrontPositionedPrinter + +trait ProofInterfaceDefinitions extends ProofEnvironmentDefinitions { + + private def prettyFrame(string: String, verticalPadding: Int = 0, horizontalPadding: Int = 2): String = { + val (space, vertical, horizontal, corner) = (' ', '|', '-', '+') + val lines = string.split("\n") + val maxLength = lines.map(_.length).max + val bottomAndTop = (corner +: Seq.fill(maxLength + 2 * horizontalPadding)(horizontal) :+ corner).mkString + val bottomAndTopMargin = (vertical +: Seq.fill(maxLength + 2 * horizontalPadding)(space) :+ vertical).mkString + val linesMargin = + lines.map(line => Seq(vertical) ++ Seq.fill(horizontalPadding)(space) ++ line.toCharArray ++ Seq.fill(maxLength - line.length + horizontalPadding)(space) ++ Seq(vertical)).map(_.mkString) + (Seq(bottomAndTop) ++ Seq.fill(verticalPadding)(bottomAndTopMargin) ++ linesMargin ++ Seq.fill(verticalPadding)(bottomAndTopMargin) ++ Seq(bottomAndTop)).mkString("\n") + } + + /** + * The proof mode represents an interface for [[ProofModeState]]. + * It is stateful, and as such should be mutated using the commands available, e.g. [[apply]]. + * It is interactive, in the sense that the command applications print information in the standard output. + * When no proof goal remains, a theorem can be obtained with [[asTheorem]]. 
+ */ + case class ProofMode private (private var currentState: ProofModeState) { + def state: ProofState = currentState.state + def proving: ProofState = currentState.proving + def apply(mutator: ProofModeStateMutator): Boolean = { + print(s"Trying to apply '${mutator.getClass.getSimpleName}'...") + val result = mutator.applyMutator(currentState) match { + case Some(newState) => + println(" [ok]") + currentState = newState + true + case None => + println(" [!!! failure !!!]") + false + } + println() + println(prettyFrame(currentState.state.toString)) + println() + result + } + def focus(goal: Int): Boolean = apply(TacticFocusGoal(goal)) + def back(): Boolean = apply(CancelPreviousTactic) + def repeat(tactic: Tactic): Unit = apply(TacticRepeat(tactic)) + def applyOne(tactics: Tactic*): Boolean = apply(TacticFallback(tactics)) + def reset(): Unit = apply(CancelPreviousTactic) + def asTheorem(): Theorem = { + require(state.goals.isEmpty, "The proof is incomplete and thus cannot be converted into a theorem") + val env = currentState.environment + val theorem = env.mkTheorem(Proof(proving.goals*)(currentState.tactics*)) + theorem.display() + } + override def toString: String = + (Seq("subgoals:", currentState.state.toString) ++ Seq("proving:", currentState.proving.toString)).mkString("\n") + } + object ProofMode { + def apply(goals: Sequent*)(using environment: ProofEnvironment): ProofMode = { + val initial = ProofMode(initialProofModeState(goals*)(environment)) + println("Entering proof mode") + println() + println(prettyFrame(initial.state.toString)) + println() + initial + } + } + + extension [T <: Justified](justified: T) { + def display(): T = { + println(justified) + println() + justified + } + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/proof/state/ProofStateDefinitions.scala b/lisa-front/src/main/scala/lisa/front/proof/state/ProofStateDefinitions.scala new file mode 100644 index 
0000000000000000000000000000000000000000..ce1a6f481d15fd28403b0b3fe9808f87533f6e65 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/proof/state/ProofStateDefinitions.scala @@ -0,0 +1,364 @@ +package lisa.front.proof.state + +import lisa.front.fol.FOL.* +import lisa.front.proof.sequent.SequentDefinitions +import lisa.front.proof.sequent.SequentOps +import lisa.kernel.proof.SCProof +import lisa.kernel.proof.SequentCalculus.Rewrite +import lisa.kernel.proof.SequentCalculus.SCProofStep +import lisa.kernel.proof.SequentCalculus.SCSubproof +import lisa.utils.ProofsShrink + +trait ProofStateDefinitions extends SequentDefinitions with SequentOps { + + /** + * A proof in the front. + * It corresponds to an initial state represented by a sequence of goals, and a sequence of tactics to be applied. + * Note that the tactics do not necessarily eliminate all goals. + * @param initialState the initial state + * @param steps the tactics + */ + case class Proof(initialState: ProofState, steps: Seq[Tactic]) + object Proof { + def apply(goals: Sequent*)(steps: Tactic*): Proof = Proof(ProofState(goals.toIndexedSeq), steps) + } + + /** + * The proof state is a sequence of proof goals, which are themselves sequents. + * @param goals the goals in this state + */ + final case class ProofState(goals: IndexedSeq[Sequent]) { + override def toString: String = + ((if (goals.nonEmpty) s"${goals.size} goal${if (goals.sizeIs > 1) "s" else ""}" else "[zero goals]") +: + goals.map(_.toString).map(s => s"- $s")).mkString("\n") + } + object ProofState { + def apply(goals: Sequent*): ProofState = ProofState(goals.toIndexedSeq) + } + + type MutatorResult = Option[ProofModeState] + private[ProofStateDefinitions] type TacticResult = Option[Seq[(AppliedTactic, ProofStateSnapshot)]] + + /** + * A general class that describes a mutation on the proof mode state.
+ */ + sealed abstract class ProofModeStateMutator { + def applyMutator(proofModeState: ProofModeState): MutatorResult + } + case object CancelPreviousTactic extends ProofModeStateMutator { + override def applyMutator(proofModeState: ProofModeState): MutatorResult = + proofModeState.steps match { + case _ +: previousSteps => + Some(proofModeState.copy(steps = previousSteps)) + case _ => None + } + } + case object ResetProofMode extends ProofModeStateMutator { + override def applyMutator(proofModeState: ProofModeState): MutatorResult = + Some(proofModeState.copy(steps = Seq.empty)) + } + + /** + * A tactic is a function that can transform a proof state into a new proof state. + * When applied it returns an [[AppliedTactic]] object along with the new state. + */ + sealed abstract class Tactic extends ProofModeStateMutator { + override final def applyMutator(proofModeState: ProofModeState): MutatorResult = + applySnapshot(proofModeState.lastSnapshot, proofModeState.environment).map(steps => proofModeState.withNewSteps(steps)) + private[ProofStateDefinitions] def applySnapshot(snapshot: ProofStateSnapshot, env: ProofEnvironment): TacticResult + } + case class TacticFocusGoal(goal: Int) extends Tactic { + override private[ProofStateDefinitions] def applySnapshot(snapshot: ProofStateSnapshot, env: ProofEnvironment): TacticResult = { + if (snapshot.proofState.goals.indices.contains(goal)) { + // This function moves the element of index `goal` to the front + def bringToFront[T](goals: IndexedSeq[T]): IndexedSeq[T] = + goals(goal) +: (goals.take(goal) ++ goals.drop(goal + 1)) + val newProofState = ProofState(bringToFront(snapshot.proofState.goals)) + val newShadowProofState = bringToFront(snapshot.shadowProofState) + Some( + Seq( + ( + AppliedTactic(-1, this, () => IndexedSeq.empty, false, Map.empty), + ProofStateSnapshot(newProofState, newShadowProofState, snapshot.nextId) + ) + ) + ) + } else { + None + } + } + } + case class TacticRepeat(tactic: Tactic) extends Tactic { 
+ override private[ProofStateDefinitions] def applySnapshot(snapshot: ProofStateSnapshot, env: ProofEnvironment): TacticResult = { + def repeat(currentSnapshot: ProofStateSnapshot, acc: Seq[(AppliedTactic, ProofStateSnapshot)], executed: Boolean): TacticResult = { + tactic.applySnapshot(currentSnapshot, env) match { + case Some(seq) if seq.nonEmpty => + val reversed = seq.reverse + repeat(reversed.head._2, reversed ++ acc, true) + case _ => if (executed) Some(acc.reverse) else None + } + } + repeat(snapshot, Seq.empty, true) + } + } + case class TacticFallback(tactics: Seq[Tactic]) extends Tactic { + override private[ProofStateDefinitions] def applySnapshot(snapshot: ProofStateSnapshot, env: ProofEnvironment): TacticResult = { + def iteratedTry(remaining: Seq[Tactic]): TacticResult = remaining match { + case tactic +: tail => + tactic.applySnapshot(snapshot, env) match { + case Some(result) => Some(result) + case None => iteratedTry(tail) + } + case _ => None + } + iteratedTry(tactics) + } + } + case class TacticCombine(tactics: Seq[Tactic]) extends Tactic { + override private[ProofStateDefinitions] def applySnapshot(snapshot: ProofStateSnapshot, env: ProofEnvironment): TacticResult = { + def iterated(remaining: Seq[Tactic], currentSnapshot: ProofStateSnapshot, acc: Seq[(AppliedTactic, ProofStateSnapshot)]): TacticResult = remaining match { + case tactic +: tail => + tactic.applySnapshot(currentSnapshot, env) match { + case Some(result) => + val reversed = result.reverse + val newSnapshot = reversed.headOption match { + case Some((_, head)) => head + case None => currentSnapshot + } + iterated(tail, newSnapshot, reversed ++ acc) + case None => None + } + case _ => Some(acc.reverse) + } + iterated(tactics, snapshot, Seq.empty) + } + } + + /** + * A particular case of tactic that works on a single goal. 
+ */ + sealed abstract class TacticGoal extends Tactic { + override private[ProofStateDefinitions] def applySnapshot(snapshot: ProofStateSnapshot, env: ProofEnvironment): TacticResult = { + (snapshot.proofState.goals, snapshot.shadowProofState) match { + case (proofGoal +: tailGoals, id +: tailShadowProofState) => + applyGoal(proofGoal, env) match { + case Some(opt) => + val (newGoalsOrJustifications, reconstruct) = opt.getOrElse((IndexedSeq.empty, () => IndexedSeq.empty)) + val newGoals = newGoalsOrJustifications.map { + case Left(sequent) => sequent + case Right(justified) => justified.sequent + } + val newGoalsShown = newGoalsOrJustifications.collect { case Left(sequent) => + sequent + } + // We prepend the newly created goals + val newProofState = ProofState(newGoalsShown ++ tailGoals) + // Number of goals that have been created (or updated), possibly zero + // This corresponds to the number of premises in that rule + val nReplacedGoals = newGoals.size + val newShadowGoals = snapshot.nextId until (snapshot.nextId + nReplacedGoals) + val newShadowGoalsShown = newShadowGoals.zip(newGoalsOrJustifications).collect { case (i, Left(_)) => i } + // Updated shadow proof state (= ids for the new proof state) + val newShadowProofState = newShadowGoalsShown ++ tailShadowProofState + // Since we created n new goals, we must increment the counter by n + val newNextId = snapshot.nextId + nReplacedGoals + + val justifications = newShadowGoals.zip(newGoalsOrJustifications).collect { case (i, Right(justified)) => (i, justified) }.toMap + + Some(Seq((AppliedTactic(id, this, reconstruct, opt.isEmpty, justifications), ProofStateSnapshot(newProofState, newShadowProofState, newNextId)))) + case None => None + } + case _ => None + } + } + def applyGoal(proofGoal: Sequent, env: ProofEnvironment): Option[Option[(IndexedSeq[Either[Sequent, Justified]], ReconstructSteps)]] + } + case class TacticApplyJustification(justified: Justified) extends TacticGoal { + override def 
applyGoal(proofGoal: Sequent, env: ProofEnvironment): Option[Option[(IndexedSeq[Either[Sequent, Justified]], ReconstructSteps)]] = { + if (justified.environment == env && justified.sequent == proofGoal && env.contains(proofGoal)) { + Some(None) + } else { + None + } + } + } + + type ReconstructSteps = () => IndexedSeq[SCProofStep] + + // The premises indexing is implicit: + // * 0, 1, 2 will reference respectively the first, second and third steps in that array + + abstract class TacticGoalFunctionalPruning extends TacticGoal { + override def applyGoal(proofGoal: Sequent, env: ProofEnvironment): Option[Option[(IndexedSeq[Either[Sequent, Justified]], ReconstructSteps)]] = + apply(proofGoal).map(result => Some(result)) + def apply(proofGoal: Sequent): Option[(IndexedSeq[Either[Sequent, Justified]], ReconstructSteps)] + } + + abstract class TacticGoalFunctional extends TacticGoal { + override def applyGoal(proofGoal: Sequent, env: ProofEnvironment): Option[Option[(IndexedSeq[Either[Sequent, Justified]], ReconstructSteps)]] = + apply(proofGoal).map { case (sequent, reconstruct) => Some((sequent.map(Left.apply), reconstruct)) } + def apply(proofGoal: Sequent): Option[(IndexedSeq[Sequent], ReconstructSteps)] + } + + trait ReadableProofEnvironment { + def contains(sequent: Sequent): Boolean + def belongsToTheory(sequent: SequentBase): Boolean + } + + type ProofEnvironment <: ReadableProofEnvironment + + trait ReadableJustified { + private[proof] def environment: ProofEnvironment + def sequent: Sequent + } + type Justified <: ReadableJustified + + private[ProofStateDefinitions] case class ProofStateSnapshot( + proofState: ProofState, + shadowProofState: IndexedSeq[Int], + nextId: Int + ) + + private[ProofStateDefinitions] case class AppliedTactic(id: Int, tactic: Tactic, reconstruct: ReconstructSteps, isTheorem: Boolean, toClose: Map[Int, Justified]) + + /** + * The proof mode state represents a backward proof builder. 
+ * It is initialized by specifying a sequent (the starting goal). + * Applied tactics may be appended using the method [[withNewSteps]]. + * See [[reconstructSCProof]] for the conversion of this object into a kernel proof. + * @param initialSnapshot the initial snapshot, holding the goals to prove + * @param steps the applied tactics, grouped and stored in reverse order + * @param environment the proof environment this proof lives in + */ + case class ProofModeState private[ProofStateDefinitions] ( + private[ProofStateDefinitions] val initialSnapshot: ProofStateSnapshot, + private[ProofStateDefinitions] val steps: Seq[Seq[(AppliedTactic, ProofStateSnapshot)]], // Steps are in reverse direction (the first element is the latest) + environment: ProofEnvironment + ) { + private[ProofStateDefinitions] def lastSnapshot: ProofStateSnapshot = + steps.view.flatMap(_.lastOption).headOption.map { case (_, snapshot) => snapshot }.getOrElse(initialSnapshot) + private[ProofStateDefinitions] def zippedSteps: Seq[(ProofStateSnapshot, AppliedTactic, ProofStateSnapshot)] = { + val flatSteps = steps.flatMap(_.reverse) + val snapshots = flatSteps.map { case (_, snapshot) => snapshot } :+ initialSnapshot + snapshots.zip(snapshots.tail).zip(flatSteps.map { case (applied, _) => applied }).map { case ((snapshotAfter, snapshotBefore), applied) => + (snapshotBefore, applied, snapshotAfter) + } + } + private[ProofStateDefinitions] def withNewSteps(step: Seq[(AppliedTactic, ProofStateSnapshot)]): ProofModeState = + copy(steps = step +: steps) + + def state: ProofState = lastSnapshot.proofState + def proving: ProofState = initialSnapshot.proofState + def tactics: Seq[Tactic] = steps.reverse.flatten.map { case (AppliedTactic(_, tactic, _, _, _), _) => tactic } + } + + /** + * Evaluates a proof by converting tactics to applied tactics.
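+ *
+ * A hypothetical sketch (`goal`, `tactic1`, `tactic2` and `env` are placeholders):
+ * {{{
+ *   val proof = Proof(goal)(tactic1, tactic2)
+ *   evaluateProof(proof)(env) match {
+ *     case Some(modeState) => reconstructSCProof(modeState) // convert to a kernel proof
+ *     case None => ??? // some tactic failed to apply
+ *   }
+ * }}}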
+ * @param proof the proof to evaluate + * @param environment the proof environment + * @return an optional final proof mode state after applying all the tactics + */ + def evaluateProof(proof: Proof)(environment: ProofEnvironment): Option[ProofModeState] = { + def applyTactics(tactics: Seq[Tactic], proofModeState: ProofModeState): Option[ProofModeState] = tactics match { + case tactic +: rest => + tactic.applyMutator(proofModeState) match { + case Some(newProofModeState) => applyTactics(rest, newProofModeState) + case None => None + } + case _ => Some(proofModeState) + } + applyTactics(proof.steps, initialProofModeState(proof.initialState.goals*)(environment)) + } + + /** + * Reconstructs a kernel proof from an instance of a [[ProofModeState]]. + * The passed mode may still contain goals; in that case they will be included as imports. + * @param proofModeState the proof mode to use + * @return the final proof, and a mapping from imports to the theorems used + */ + def reconstructSCProof(proofModeState: ProofModeState): (SCProof, Map[Int, Sequent]) = { + val proofEnvironment = proofModeState.environment + // Each proof goal that is created (or updated) will be given a unique id + // Then we use these ids to refer to them as a proof step in the SC proof + + // For a complete proof the proof state should be empty + // However for testing purposes we may still allow incomplete proofs to exist, + // and for that we should convert unclosed branches into imports + val imports = proofModeState.lastSnapshot.proofState.goals.map(sequentToKernel) + val initialTranslation = proofModeState.lastSnapshot.shadowProofState.zipWithIndex.map { case (v, i) => v -> -(i + 1) }.toMap + + val (finalProof, _, finalTheorems) = proofModeState.zippedSteps.foldLeft((SCProof(IndexedSeq.empty, imports), initialTranslation, Map.empty[Int, Sequent])) { + case ((proof, translation, theorems), (snapshotBefore, applied, snapshotAfter)) => + val reconstructedSteps = applied.reconstruct() + val isTheorem =
applied.isTheorem + val nReplacedGoals = snapshotAfter.nextId - snapshotBefore.nextId // TODO do not rely on the ids for that + val id = applied.id // TODO + val updatedGoal = snapshotBefore.proofState.goals.head + + val sortedClosed = applied.toClose.toSeq.sortBy(_._1) + val newTheorems = theorems ++ sortedClosed.zipWithIndex.map { case ((_, justified), i) => + (proof.imports.size + i) -> justified.sequent + }.toMap + val newTranslation = translation ++ sortedClosed.zipWithIndex.map { case ((id, _), j) => + id -> -(proof.imports.size + j + 1) + } + val newImports = proof.imports ++ sortedClosed.map(_._2.sequent).map(sequentToKernel) + val newProof0 = proof.copy(imports = newImports) + + val premises = (snapshotBefore.nextId until snapshotAfter.nextId).map(newTranslation) + val reconstructedAndRemappedStep = + if (reconstructedSteps.nonEmpty) + Some( + SCSubproof( + SCProof(reconstructedSteps, premises.map(newProof0.getSequent)), + premises + ) + ) + else + None + val newProof = newProof0.withNewSteps(reconstructedAndRemappedStep.toIndexedSeq) + + // We return the expanded proof, along with the information to recover the last (= current) step as a premise + if (isTheorem) { + val importId = newProof.imports.size + val translatedId = -(importId + 1) + ( + newProof.copy(imports = newProof.imports :+ sequentToKernel(updatedGoal)), + newTranslation + (id -> translatedId), + newTheorems + (importId -> updatedGoal) + ) + } else { + val translatedId = newProof.steps.size - 1 + ( + newProof, + newTranslation + (id -> translatedId), + newTheorems + ) + } + } + + (ProofsShrink.flattenProof(finalProof), finalTheorems) + } + + // The final conclusion is given the id 0, although it will never be referenced as a premise + def initialProofModeState(goals: Sequent*)(environment: ProofEnvironment): ProofModeState = { + require(goals.forall(isAcceptedSequent(_)(environment))) + ProofModeState(ProofStateSnapshot(ProofState(goals*), 0 until goals.size, goals.size), Seq.empty, 
environment) + } + + def isAcceptedSequent(sequent: Sequent)(environment: ProofEnvironment): Boolean = { + isSequentWellFormed(sequent) && schematicConnectorsOfSequent(sequent).isEmpty && environment.belongsToTheory(sequent) // TODO is printable + } + + /** + * A helper module that provides common symbols for usage in rules. + */ + object Notations { + val (a, b, c, d, e) = (SchematicPredicateLabel[0]("a"), SchematicPredicateLabel[0]("b"), SchematicPredicateLabel[0]("c"), SchematicPredicateLabel[0]("d"), SchematicPredicateLabel[0]("e")) + val (s, t, u) = (SchematicTermLabel[0]("s"), SchematicTermLabel[0]("t"), SchematicTermLabel[0]("u")) + val f: SchematicConnectorLabel[1] = SchematicConnectorLabel[1]("f") + val p: SchematicPredicateLabel[1] = SchematicPredicateLabel[1]("p") + val (x, y) = (VariableLabel("x"), VariableLabel("y")) + } + +} diff --git a/lisa-front/src/main/scala/lisa/front/proof/state/RuleDefinitions.scala b/lisa-front/src/main/scala/lisa/front/proof/state/RuleDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..114edf044bd946839a83693bc41824f01082c761 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/proof/state/RuleDefinitions.scala @@ -0,0 +1,168 @@ +package lisa.front.proof.state + +import lisa.front.fol.FOL.* +import lisa.front.proof.unification.UnificationUtils +import lisa.kernel.proof.SequentCalculus.SCProofStep +import lisa.kernel.proof.SequentCalculus.SCSubproof + +import scala.collection.View + +trait RuleDefinitions extends ProofEnvironmentDefinitions with UnificationUtils { + + type ReconstructRule = PartialFunction[(lisa.kernel.proof.SequentCalculus.Sequent, UnificationContext), IndexedSeq[SCProofStep]] + + /** + * The parameters to instantiate a rule into a tactic (see [[RuleTactic]]). 
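+ *
+ * A hypothetical sketch of the builder-style API (`someTerm` and `someFormula` are placeholders; the labels come from [[Notations]]):
+ * {{{
+ *   val parameters = RuleParameters()
+ *     .withFunction(Notations.t, someTerm)     // assign a nullary function schema
+ *     .withPredicate(Notations.a, someFormula) // assign a nullary predicate schema
+ *     .withIndices(0)(0)(1)                    // for pattern 0, select formula 0 on the left and formula 1 on the right
+ * }}}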
+ * @param selectors the correspondence between patterns and values, can be partial + * @param functions a partial assignment of functions + * @param predicates a partial assignment of predicates + * @param connectors a partial assignment of connectors + * @param variables a partial assignment of free variables + */ + case class RuleParameters( + selectors: Map[Int, SequentSelector] = Map.empty, + functions: Seq[AssignedFunction] = Seq.empty, + predicates: Seq[AssignedPredicate] = Seq.empty, + connectors: Seq[AssignedConnector] = Seq.empty, + variables: Map[VariableLabel, VariableLabel] = Map.empty + ) { + def withIndices(i: Int)(left: Int*)(right: Int*): RuleParameters = { + val pair = (left.toIndexedSeq, right.toIndexedSeq) + copy(selectors = selectors + (i -> pair)) + } + + def withFunction[N <: Arity]( + label: SchematicTermLabel[N], + f: FillArgs[SchematicTermLabel[0], N] => Term + )(using ValueOf[N]): RuleParameters = + copy(functions = functions :+ AssignedFunction(label, LambdaFunction[N](f))) + def withFunction(label: SchematicTermLabel[0], value: Term): RuleParameters = + withFunction(label, _ => value) + + def withPredicate[N <: Arity]( + label: SchematicPredicateLabel[N], + f: FillArgs[SchematicTermLabel[0], N] => Formula + )(using ValueOf[N]): RuleParameters = copy(predicates = predicates :+ AssignedPredicate(label, LambdaPredicate(f))) + def withPredicate(label: SchematicPredicateLabel[0], value: Formula): RuleParameters = + withPredicate(label, _ => value) + + def withConnector[N <: Arity]( + label: SchematicConnectorLabel[N], + f: FillArgs[SchematicPredicateLabel[0], N] => Formula + )(using ValueOf[N]): RuleParameters = { + require(label.arity > 0, "For consistency, use nullary predicate schemas instead of connectors") + copy(connectors = connectors :+ AssignedConnector(label, LambdaConnector(f))) + } + + def withVariable(label: VariableLabel, value: VariableLabel): RuleParameters = + copy(variables = variables + (label -> value)) + } + object 
RuleParameters { + def apply(args: (AssignedFunction | AssignedPredicate | AssignedConnector | (VariableLabel, VariableLabel))*): RuleParameters = + args.foldLeft(new RuleParameters())((acc, e) => + e match { + case assigned: AssignedFunction => acc.copy(functions = acc.functions :+ assigned) + case assigned: AssignedPredicate => acc.copy(predicates = acc.predicates :+ assigned) + case assigned: AssignedConnector => acc.copy(connectors = acc.connectors :+ assigned) + case pair @ (_: VariableLabel, _: VariableLabel) => acc.copy(variables = acc.variables + pair) + } + ) + } + + protected def applyRuleInference( + parameters: RuleParameters, + patternsFrom: IndexedSeq[PartialSequent], + patternsTo: IndexedSeq[PartialSequent], + valuesFrom: IndexedSeq[Sequent] + ): Option[(IndexedSeq[Sequent], UnificationContext)] = { + def parametersView: View[IndexedSeq[SequentSelector]] = + if (patternsFrom.size == valuesFrom.size) { + matchIndices(parameters.selectors, patternsFrom, valuesFrom) + } else { + View.empty + } + + parametersView.flatMap { selectors => + val ctx = UnificationContext( + parameters.predicates.map(r => r.schema -> r.lambda).toMap, + parameters.functions.map(r => r.schema -> r.lambda).toMap, + parameters.connectors.map(r => r.schema -> r.lambda).toMap + ) + unifyAndResolve(patternsFrom, valuesFrom, patternsTo, ctx, selectors) + }.headOption + } + + /** + * An instantiated rule. Note that the parameters can be incorrect, in that case the tactic will always fail. 
+ * @param rule the original rule + * @param parameters the parameters used for the instantiation + */ + case class RuleTactic private[RuleDefinitions] (rule: Rule, parameters: RuleParameters) extends TacticGoalFunctional { + override def apply(proofGoal: Sequent): Option[(IndexedSeq[Sequent], ReconstructSteps)] = { + applyRuleInference(parameters, IndexedSeq(rule.conclusion), rule.hypotheses, IndexedSeq(proofGoal)).flatMap { case (newGoals, ctx) => + val stepsOption = rule.reconstruct.andThen(Some.apply).applyOrElse((proofGoal, ctx), _ => None) + stepsOption.map(steps => (newGoals, () => steps)) + } + } + + override def toString: String = s"${rule.getClass.getSimpleName}(...)" + } + + /** + * A rule is an object specifying a type of transformation on a justified statement or a proof goal. + * It is characterized by a sequence of premises (also known as hypotheses) and a conclusion, all given as patterns. + * It must also define a reconstruction function, in order to translate it to kernel proof steps.
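+ *
+ * A rule can be instantiated into a [[RuleTactic]] for backward proving, or applied forward to
+ * justified statements (hypothetical sketch, `someRule` and `previousTheorem` being placeholders):
+ * {{{
+ *   val tactic: RuleTactic = someRule(RuleParameters())            // for use in a proof mode
+ *   val theoremOption: Option[Theorem] = someRule(previousTheorem) // forward application
+ * }}}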
+ */ + sealed abstract class Rule { + def hypotheses: IndexedSeq[PartialSequent] + def conclusion: PartialSequent + + def reconstruct: ReconstructRule + + require(isLegalPatterns(hypotheses) && isLegalPatterns(IndexedSeq(conclusion))) + + final def apply(parameters: RuleParameters = RuleParameters()): RuleTactic = + RuleTactic(this, parameters) + + final def apply(justification0: Justified, rest: Justified*): Option[Theorem] = + apply(RuleParameters())((justification0 +: rest)*)(using justification0.environment) + /*final def apply(parameters: RuleParameters)(justification0: Justified, rest: Justified*): Option[Theorem] = { + val env = justification0.environment + val justifications = justification0 +: rest + apply(parameters)(justifications: _*)(using env) + }*/ + final def apply(parameters: RuleParameters)(using env: ProofEnvironment): Option[Theorem] = + apply(parameters)() + + final def apply(parameters: RuleParameters)(justifications: Justified*)(using env: ProofEnvironment): Option[Theorem] = { + val justificationsSeq = justifications.toIndexedSeq + val topSequents = justificationsSeq.map(_.sequent) + applyRuleInference(parameters, hypotheses, IndexedSeq(conclusion), topSequents).flatMap { + case (IndexedSeq(newSequent), ctx) => + reconstruct.andThen(Some.apply).applyOrElse((newSequent, ctx), _ => None).map { scSteps => + val scProof = lisa.kernel.proof.SCProof(scSteps, justificationsSeq.map(_.sequentAsKernel)) + env.mkTheorem(newSequent, scProof, justificationsSeq) + } + case _ => throw new Error + } + } + + override def toString: String = { + val top = hypotheses.map(_.toString).mkString(" " * 6) + val bottom = conclusion.toString + val length = Math.max(top.length, bottom.length) + + def pad(s: String): String = " " * ((length - s.length) / 2) + s + + Seq(pad(top), "=" * length, pad(bottom)).mkString("\n") + } + } + + /** + * A constructor for [[Rule]]. 
+ */ + open class RuleBase(override val hypotheses: IndexedSeq[PartialSequent], override val conclusion: PartialSequent, override val reconstruct: ReconstructRule) extends Rule + + given Conversion[Rule, RuleTactic] = _() + +} diff --git a/lisa-front/src/main/scala/lisa/front/proof/unification/UnificationDefinitions.scala b/lisa-front/src/main/scala/lisa/front/proof/unification/UnificationDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..ffdf7d690ee9d134275fdaf9e423d50ff359002a --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/proof/unification/UnificationDefinitions.scala @@ -0,0 +1,51 @@ +package lisa.front.proof.unification + +import lisa.front.fol.FOL.* + +trait UnificationDefinitions { + + /** + * An assignment for a unification problem instance. + * @param predicates the assigned predicates + * @param functions the assigned functions + * @param connectors the assigned connectors + * @param variables the assigned variables + */ + case class UnificationContext( + predicates: Map[SchematicPredicateLabel[?], LambdaPredicate[?]] = Map.empty, + functions: Map[SchematicTermLabel[?], LambdaFunction[?]] = Map.empty, + connectors: Map[SchematicConnectorLabel[?], LambdaConnector[?]] = Map.empty, + variables: Map[VariableLabel, VariableLabel] = Map.empty + ) { + infix def +(predicate: AssignedPredicate): UnificationContext = copy(predicates = predicates + (predicate.schema -> predicate.lambda)) + infix def +(function: AssignedFunction): UnificationContext = copy(functions = functions + (function.schema -> function.lambda)) + infix def +(connector: AssignedConnector): UnificationContext = copy(connectors = connectors + (connector.schema -> connector.lambda)) + infix def +(pair: (VariableLabel, VariableLabel)): UnificationContext = copy(variables = variables + pair) + + def apply[N <: Arity](predicate: SchematicPredicateLabel[N]): LambdaPredicate[N] = predicates(predicate).asInstanceOf[LambdaPredicate[N]] + def
apply[N <: Arity](function: SchematicTermLabel[N]): LambdaFunction[N] = functions(function).asInstanceOf[LambdaFunction[N]] + def apply[N <: Arity](connector: SchematicConnectorLabel[N]): LambdaConnector[N] = connectors(connector).asInstanceOf[LambdaConnector[N]] + + def apply(predicate: SchematicPredicateLabel[0]): Formula = predicates(predicate).body + def apply(function: SchematicTermLabel[0]): Term = functions(function).body + + def assignedPredicates: Seq[AssignedPredicate] = predicates.map { case (k, v) => AssignedPredicate.unsafe(k, v) }.toSeq + def assignedFunctions: Seq[AssignedFunction] = functions.map { case (k, v) => AssignedFunction.unsafe(k, v) }.toSeq + def assignedConnectors: Seq[AssignedConnector] = connectors.map { case (k, v) => AssignedConnector.unsafe(k, v) }.toSeq + } + + /** + * A helper object that represents a renaming. + * @param predicates the renamed predicates + * @param functions the renamed functions + * @param connectors the renamed connectors + * @param variables the renamed free variables + */ + case class RenamingContext( + predicates: Seq[RenamedPredicateSchema] = Seq.empty, + functions: Seq[RenamedFunctionSchema] = Seq.empty, + connectors: Seq[RenamedConnectorSchema] = Seq.empty, + variables: Map[VariableLabel, VariableLabel] = Map.empty + ) + +} diff --git a/lisa-front/src/main/scala/lisa/front/proof/unification/UnificationUtils.scala b/lisa-front/src/main/scala/lisa/front/proof/unification/UnificationUtils.scala new file mode 100644 index 0000000000000000000000000000000000000000..fd14e4aa9b5e484b43223205b5de5a6ece08e232 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/proof/unification/UnificationUtils.scala @@ -0,0 +1,590 @@ +package lisa.front.proof.unification + +import lisa.front.fol.FOL.* +import lisa.front.proof.sequent.SequentDefinitions + +import scala.collection.View + +trait UnificationUtils extends UnificationDefinitions with SequentDefinitions { + + /** + * Whether a collection of patterns is legal (e.g. 
no malformed formulas, no clashing variables, ...) + * @param patterns the patterns to check + * @return whether the patterns are legal or not + */ + def isLegalPatterns(patterns: IndexedSeq[PartialSequent]): Boolean = { + lazy val boundVariables = patterns.flatMap(declaredBoundVariablesOfSequent) + + // Applications match arity, no clashing bound variable patterns + lazy val noMalformedFormulas = patterns.forall(isSequentWellFormed) + // Declared variable patterns must have a globally unique name + lazy val noClashingBoundVariablePatterns = boundVariables.distinct.size == boundVariables.size + // Free variables should not reuse a name of a bound variable + lazy val noConflictingBoundFreeVariables = boundVariables.intersect(patterns.flatMap(freeVariablesOfSequent)).isEmpty + + noMalformedFormulas && noClashingBoundVariablePatterns && noConflictingBoundFreeVariables + } + + /** + * Inflates patterns using a context. + * @param patternsTo the patterns to inflate + * @param valuesFrom the values on the other side + * @param ctx the assignment + * @param indices the indices of the formulas that have been matched in the values + * @return the inflated values + */ + private def inflateValues( + patternsTo: IndexedSeq[PartialSequent], + valuesFrom: IndexedSeq[Sequent], + ctx: UnificationContext, + indices: IndexedSeq[(IndexedSeq[Int], IndexedSeq[Int])] + ): IndexedSeq[Sequent] = { + def removeIndices[T](array: IndexedSeq[T], indices: Seq[Int]): IndexedSeq[T] = { + val set = indices.toSet + for { + (v, i) <- array.zipWithIndex + if !set.contains(i) + } yield v + } + + def instantiate(formulas: IndexedSeq[Formula]): IndexedSeq[Formula] = + formulas.map(formula => + instantiateFormulaSchemas(unsafeRenameVariables(formula, ctx.variables), functions = ctx.assignedFunctions, predicates = ctx.assignedPredicates, connectors = ctx.assignedConnectors) + ) + + def createValueTo(common: IndexedSeq[Formula], pattern: IndexedSeq[Formula], partial: Boolean): IndexedSeq[Formula] = { + 
val instantiated = instantiate(pattern) + if (partial) common ++ instantiated else instantiated + } + + val (commonLeft, commonRight) = { + indices + .zip(valuesFrom) + .map { case ((indicesLeft, indicesRight), Sequent(valueLeft, valueRight)) => // Union all + (removeIndices(valueLeft, indicesLeft), removeIndices(valueRight, indicesRight)) + } + .foldLeft((IndexedSeq.empty[Formula], IndexedSeq.empty[Formula])) { case ((accL, accR), ((ls, rs))) => + (accL ++ ls.diff(accL), accR ++ rs.diff(accR)) + } + } + + val newValues = patternsTo.map(patternTo => + Sequent( + createValueTo(commonLeft, patternTo.left, patternTo.partialLeft), + createValueTo(commonRight, patternTo.right, patternTo.partialRight) + ) + ) + + newValues + } + + private type Constraints = Seq[Constraint] + private type ConstraintsResult = Option[Constraints] + + private type Context = Set[(VariableLabel, VariableLabel)] + + /** + * A constraint represents an equation between a label and a pattern. + * The constraint can be resolved as soon as the pattern can be fully instantiated to a value. 
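+ *
+ * For instance (a hypothetical illustration of the notation, not code from this file), matching the
+ * pattern `?f(s)` against the value `u`, where `?f` is a schematic function label not appearing in
+ * the values, would produce the constraint `SchematicFunction(?f, Seq(s), u, ctx)`.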
+ */ + private enum Constraint { + case SchematicFunction(label: SchematicTermLabel[?], args: Seq[Term], value: Term, ctx: Context) + case SchematicPredicate(label: SchematicPredicateLabel[?], args: Seq[Term], value: Formula, ctx: Context) + case SchematicConnector(label: SchematicConnectorLabel[?], args: Seq[Formula], value: Formula, ctx: Context) + case Variable(pattern: VariableLabel, value: VariableLabel) + } + import Constraint.* + + private val empty: ConstraintsResult = Some(Seq.empty) + private def merge(o1: ConstraintsResult, o2: ConstraintsResult): ConstraintsResult = + for { + c1 <- o1 + c2 <- o2 + } yield c1 ++ c2 + private def collectRecursiveTerm( + pattern: Term, + value: Term, + valuesFunctions: Set[SchematicTermLabel[?]], + valuesVariables: Set[VariableLabel] + )(using ctx: Context): ConstraintsResult = (pattern, value) match { + case (Term(labelPattern: TermLabel[?], argsPattern), Term(labelValue: TermLabel[?], argsValue)) if labelPattern == labelValue => + argsPattern.zip(argsValue).map { case (argPattern, argValue) => collectRecursiveTerm(argPattern, argValue, valuesFunctions, valuesVariables) }.fold(empty)(merge) + case (Term(labelPattern: SchematicTermLabel[?], argsPattern), _) if !valuesFunctions.contains(labelPattern) => + Some(Seq(SchematicFunction(labelPattern, argsPattern, value, ctx))) + case (VariableTerm(labelPattern), VariableTerm(labelValue)) => + if (valuesVariables.contains(labelPattern)) { + if (labelPattern == labelValue) { + Some(Seq.empty) + } else { + None + } + } else if (ctx.contains((labelPattern, labelValue))) { // Bound variable + Some(Seq(Variable(labelPattern, labelValue))) + } else if (ctx.forall { case (p, v) => p != labelPattern && v != labelValue }) { // Free variable + Some(Seq(Variable(labelPattern, labelValue))) // TODO merge these branches + } else { + None + } + case _ => None + } + private def collectRecursiveFormula( + pattern: Formula, + value: Formula, + valuesFunctions: Set[SchematicTermLabel[?]], + 
valuesPredicates: Set[SchematicPredicateLabel[?]], + valuesConnectors: Set[SchematicConnectorLabel[?]], + valuesVariables: Set[VariableLabel] + )(using ctx: Set[(VariableLabel, VariableLabel)]): ConstraintsResult = (pattern, value) match { + case (PredicateFormula(labelPattern: PredicateLabel[?], argsPattern), PredicateFormula(labelValue: PredicateLabel[?], argsValue)) if labelPattern == labelValue => + argsPattern.zip(argsValue).map { case (argPattern, argValue) => collectRecursiveTerm(argPattern, argValue, valuesFunctions, valuesVariables) }.fold(empty)(merge) + case (PredicateFormula(labelPattern: SchematicPredicateLabel[?], argsPattern), _) if !valuesPredicates.contains(labelPattern) => + Some(Seq(SchematicPredicate(labelPattern, argsPattern, value, ctx))) + case (ConnectorFormula(labelPattern: ConnectorLabel[?], argsPattern), ConnectorFormula(labelValue: ConnectorLabel[?], argsValue)) if labelPattern == labelValue => + argsPattern + .zip(argsValue) + .map { case (argPattern, argValue) => collectRecursiveFormula(argPattern, argValue, valuesFunctions, valuesPredicates, valuesConnectors, valuesVariables) } + .fold(empty)(merge) + case (ConnectorFormula(labelPattern: SchematicConnectorLabel[?], argsPattern), _) if !valuesConnectors.contains(labelPattern) => + Some(Seq(SchematicConnector(labelPattern, argsPattern, value, ctx))) + case (BinderFormula(labelPattern, boundPattern, innerPattern), BinderFormula(labelValue, boundValue, innerValue)) if labelPattern == labelValue => + collectRecursiveFormula(innerPattern, innerValue, valuesFunctions, valuesPredicates, valuesConnectors, valuesVariables)(using ctx + ((boundPattern, boundValue))) + .map(Variable(boundPattern, boundValue) +: _) + case _ => None + } + + private def collect( + patternsAndValues: IndexedSeq[(Formula, Formula)], + valuesFunctions: Set[SchematicTermLabel[?]], + valuesPredicates: Set[SchematicPredicateLabel[?]], + valuesConnectors: Set[SchematicConnectorLabel[?]], + valuesVariables: 
Set[VariableLabel] + ): ConstraintsResult = + patternsAndValues.map { case (pattern, value) => collectRecursiveFormula(pattern, value, valuesFunctions, valuesPredicates, valuesConnectors, valuesVariables)(using Set.empty) }.fold(empty)(merge) + + private def unifyFromConstraints( + constraints: Constraints, + partialAssignment: UnificationContext, + valueFunctions: Set[SchematicTermLabel[?]], + valuePredicates: Set[SchematicPredicateLabel[?]], + valueConnectors: Set[SchematicConnectorLabel[?]], + valueVariables: Set[VariableLabel] + ): Option[UnificationContext] = { + if (constraints.nonEmpty) { + def isSolvableTerm(pattern: Term)(using ctx: Set[VariableLabel]): Boolean = pattern match { + case VariableTerm(label) => valueVariables.contains(label) || partialAssignment.variables.contains(label) + case Term(_: ConstantFunctionLabel[?], args) => args.forall(isSolvableTerm) + case Term(schematic: SchematicTermLabel[?], args) => (valueFunctions.contains(schematic) || partialAssignment.functions.contains(schematic)) && args.forall(isSolvableTerm) + case _ => false + } + def isSolvableFormula(pattern: Formula)(using ctx: Set[VariableLabel]): Boolean = pattern match { + case PredicateFormula(_: ConstantPredicateLabel[?], args) => args.forall(isSolvableTerm) + case PredicateFormula(schematic: SchematicPredicateLabel[?], args) => (valuePredicates.contains(schematic) || partialAssignment.predicates.contains(schematic)) && args.forall(isSolvableTerm) + case ConnectorFormula(_: ConstantConnectorLabel[?], args) => args.forall(isSolvableFormula) + case ConnectorFormula(schematic: SchematicConnectorLabel[?], args) => + (valueConnectors.contains(schematic) || partialAssignment.connectors.contains(schematic)) && args.forall(isSolvableFormula) + case BinderFormula(_, bound, inner) => valueVariables.contains(bound) && isSolvableFormula(inner)(using ctx + bound) + case _ => false + } + + // This function tries to factor out all occurrences of `args._2` into `args._1` within `term`, and 
will store the result in `assignment` + def greedyFactoringFunction(term: Term, args: IndexedSeq[(SchematicTermLabel[0], Term)], assignment: Map[SchematicTermLabel[0], Term]): (Term, Map[SchematicTermLabel[0], Term]) = { + args.find { case (_, t) => isSame(term, instantiateTermPartial(t)) } match { + case Some((variable, value)) => (variable, if (!assignment.contains(variable)) assignment + (variable -> value) else assignment) + case None => + term match { + case Term(label, fArgs) => + val (finalArgs, finalAssignment) = fArgs.foldLeft((Seq.empty[Term], assignment)) { case ((argsAcc, currentAssignment), arg) => + val (newTerm, newAssignment) = greedyFactoringFunction(arg, args, currentAssignment) + (argsAcc :+ newTerm, newAssignment) + } + (Term.unsafe(label, finalArgs), finalAssignment) + } + } + } + + def greedyFactoringPredicate(formula: Formula, args: IndexedSeq[(SchematicTermLabel[0], Term)], assignment: Map[SchematicTermLabel[0], Term]): (Formula, Map[SchematicTermLabel[0], Term]) = { + formula match { + case PredicateFormula(label, fArgs) => + val (finalAssignment, finalFArgs) = fArgs.foldLeft((assignment, Seq.empty[Term])) { case ((currentAssignment, currentFArgs), a) => + val (newA, newAssignment) = greedyFactoringFunction(a, args, currentAssignment) + (newAssignment, currentFArgs :+ newA) + } + (PredicateFormula.unsafe(label, finalFArgs), finalAssignment) + case ConnectorFormula(label, fArgs) => + val (finalArgs, finalAssignment) = fArgs.foldLeft((Seq.empty[Formula], assignment)) { case ((argsAcc, currentAssignment), arg) => + val (newFormula, newAssignment) = greedyFactoringPredicate(arg, args, currentAssignment) + (argsAcc :+ newFormula, newAssignment) + } + (ConnectorFormula.unsafe(label, finalArgs), finalAssignment) + case BinderFormula(label, bound, inner) => + val (factoredInner, finalAssignment) = greedyFactoringPredicate(inner, args, assignment) + (BinderFormula(label, bound, factoredInner), finalAssignment) + } + } + + def 
greedyFactoringConnector( + formula: Formula, + args: IndexedSeq[(SchematicPredicateLabel[0], Formula)], + assignment: Map[SchematicPredicateLabel[0], Formula] + ): (Formula, Map[SchematicPredicateLabel[0], Formula]) = { + args.find { case (_, f) => isSame(formula, instantiateFormulaPartial(f)) } match { + case Some((variable, value)) => (variable, if (!assignment.contains(variable)) assignment + (variable -> value) else assignment) + case None => + formula match { + case _: PredicateFormula => (formula, assignment) // Identity + case ConnectorFormula(label, fArgs) => + val (finalArgs, finalAssignment) = fArgs.foldLeft((Seq.empty[Formula], assignment)) { case ((argsAcc, currentAssignment), arg) => + val (newFormula, newAssignment) = greedyFactoringConnector(arg, args, currentAssignment) + (argsAcc :+ newFormula, newAssignment) + } + (ConnectorFormula.unsafe(label, finalArgs), finalAssignment) + case BinderFormula(label, bound, inner) => + val (factoredInner, finalAssignment) = greedyFactoringConnector(inner, args, assignment) + (BinderFormula(label, bound, factoredInner), finalAssignment) + } + } + } + + def instantiateTermPartial(term: Term): Term = + instantiateTermSchemas(term, partialAssignment.assignedFunctions) + def instantiateFormulaPartial(formula: Formula): Formula = + instantiateFormulaSchemas(formula, partialAssignment.assignedFunctions, partialAssignment.assignedPredicates, partialAssignment.assignedConnectors) + + def isFormulaBodyNoBoundVariables(body: Formula, ctx: Context): Boolean = + ctx.map(_._2).intersect(freeVariablesOf(body)).isEmpty + def isTermBodyNoBoundVariables(body: Term, ctx: Context): Boolean = + ctx.map(_._2).intersect(freeVariablesOf(body)).isEmpty + + // The method tries to resolve a constraint and returns two nested options: + // * None => the constraint is unsolvable (e.g. 
too many degrees of freedom) + // * Some(None) => there is a contradiction + // * Some(Some(...)) => the constraint was reduced to new constraints and an updated assignment + def handler(constraint: Constraint): Option[Option[(Constraints, UnificationContext)]] = constraint match { + case SchematicFunction(label, args, value, ctx) if partialAssignment.functions.contains(label) => + val lambda = partialAssignment.functions(label) + if (!isTermBodyNoBoundVariables(lambda.body, ctx)) { + // All the bound variables must appear in one way or another as arguments of this lambda + Some(None) + } else if (isSame(value, lambda.unsafe(args.map(instantiateTermPartial)))) { + Some(Some((IndexedSeq.empty, partialAssignment))) + } else { + collectRecursiveTerm(lambda.unsafe(args), value, valueFunctions, valueVariables)(using ctx) match { + case Some(addedConstraints) => Some(Some(addedConstraints, partialAssignment)) + case None => Some(None) + } + } + case SchematicFunction(label, args, value, ctx) if args.forall(isSolvableTerm(_)(using ctx.map(_._1))) => + // TODO are all bound variables already instantiated?
+ val valueArgs = args.map(unsafeRenameVariables(_, ctx.toMap)) + val freshArguments = freshIds(schematicTermLabelsOf(value).map(_.id), valueArgs.size).map(SchematicTermLabel.apply[0]) + // We drop the resulting arguments map (because it is not needed anymore) + val (fBody, _) = greedyFactoringFunction(value, freshArguments.zip(valueArgs).toIndexedSeq, Map.empty) + if (isTermBodyNoBoundVariables(fBody, ctx)) { + Some(Some((IndexedSeq.empty, partialAssignment + AssignedFunction.unsafe(label, LambdaFunction.unsafe(freshArguments, fBody))))) + } else { + Some(None) + } + case SchematicPredicate(label, args, value, ctx) if partialAssignment.predicates.contains(label) => + val lambda = partialAssignment.predicates(label) + if (!isFormulaBodyNoBoundVariables(lambda.body, ctx)) { + Some(None) + } else if (isSame(value, lambda.unsafe(args.map(instantiateTermPartial)))) { + Some(Some((IndexedSeq.empty, partialAssignment))) + } else { + collectRecursiveFormula(lambda.unsafe(args), value, valueFunctions, valuePredicates, valueConnectors, valueVariables)(using ctx) match { + case Some(addedConstraints) => Some(Some(addedConstraints, partialAssignment)) + case None => Some(None) + } + } + case SchematicPredicate(label, args, value, ctx) if args.forall(isSolvableTerm(_)(using ctx.map(_._1))) => + // Analogous to the above + val valueArgs = args.map(unsafeRenameVariables(_, ctx.toMap)) + val freshArguments = freshIds(schematicTermLabelsOf(value).map(_.id), valueArgs.size).map(SchematicTermLabel.apply[0]) + val (fBody, _) = greedyFactoringPredicate(value, freshArguments.zip(valueArgs).toIndexedSeq, Map.empty) + if (isFormulaBodyNoBoundVariables(fBody, ctx)) { + Some(Some((IndexedSeq.empty, partialAssignment + AssignedPredicate.unsafe(label, LambdaPredicate.unsafe(freshArguments, fBody))))) + } else { + Some(None) + } + case SchematicConnector(label, args, value, ctx) if partialAssignment.connectors.contains(label) => + val lambda = partialAssignment.connectors(label) + if 
(!isFormulaBodyNoBoundVariables(lambda.body, ctx)) { + Some(None) + } else if (isSame(value, lambda.unsafe(args.map(instantiateFormulaPartial)))) { + Some(Some((IndexedSeq.empty, partialAssignment))) + } else { + collectRecursiveFormula(lambda.unsafe(args), value, valueFunctions, valuePredicates, valueConnectors, valueVariables)(using ctx) match { + case Some(addedConstraints) => Some(Some(addedConstraints, partialAssignment)) + case None => Some(None) + } + } + case SchematicConnector(label, args, value, ctx) if args.forall(isSolvableFormula(_)(using ctx.map(_._1))) => + val valueArgs = args.map(unsafeRenameVariables(_, ctx.toMap)) + val freshArguments = freshIds(schematicPredicatesOf(value).map(_.id), valueArgs.size).map(SchematicPredicateLabel.apply[0]) + val (fBody, _) = greedyFactoringConnector(value, freshArguments.zip(valueArgs).toIndexedSeq, Map.empty) + if (isFormulaBodyNoBoundVariables(fBody, ctx)) { + Some(Some((IndexedSeq.empty, partialAssignment + AssignedConnector.unsafe(label, LambdaConnector.unsafe(freshArguments, fBody))))) + } else { + Some(None) + } + case Variable(pattern, value) => + if (valueVariables.contains(pattern)) { + if (pattern == value) { + Some(Some((IndexedSeq.empty, partialAssignment))) + } else { + Some(None) + } + } else if (partialAssignment.variables.forall { case (l, r) => l != pattern || r == value }) { + Some(Some((IndexedSeq.empty, partialAssignment + (pattern -> value)))) + } else { + Some(None) // Contradiction + } + case _ => None + } + constraints.view.zipWithIndex.flatMap { case (constraint, i) => + handler(constraint).map(_.map { case (newConstraints, newContext) => (newConstraints, newContext, i) }) + }.headOption match { + case Some(option) => + option match { + case Some((newConstraints, newContext, i)) => + unifyFromConstraints(constraints.take(i) ++ newConstraints ++ constraints.drop(i + 1), newContext, valueFunctions, valuePredicates, valueConnectors, valueVariables) + case None => None // Explicit error + } + 
case None => None // No available reduction + } + } else { + Some(partialAssignment) + } + } + + /** + * Solves a matching (one-sided unification) problem. + * The result, if any, is an assignment for the patterns such that they become equivalent to the values. + * An optional partial assignment can be provided to help or constrain the matching. + * Additionally, it is possible to provide other patterns. In that case, the resulting sequents + * will also include all the unmatched formulas. + * @param patterns the patterns + * @param values the values + * @param otherPatterns other patterns + * @param partialAssignment the partial assignment (or empty if none) + * @param formulaIndices the correspondence between patterns and values in the sequents + * @return an option containing the inflated other patterns and the assignment + */ + def unifyAndResolve( + patterns: IndexedSeq[PartialSequent], + values: IndexedSeq[Sequent], + otherPatterns: IndexedSeq[PartialSequent], + partialAssignment: UnificationContext, + formulaIndices: IndexedSeq[(IndexedSeq[Int], IndexedSeq[Int])] + ): Option[(IndexedSeq[Sequent], UnificationContext)] = { + + def schemasOf(sequents: IndexedSeq[SequentBase]): (Set[SchematicTermLabel[?]], Set[SchematicPredicateLabel[?]], Set[SchematicConnectorLabel[?]], Set[VariableLabel], Set[VariableLabel]) = + ( + sequents.flatMap(schematicFunctionsOfSequent).toSet, + sequents.flatMap(schematicPredicatesOfSequent).toSet, + sequents.flatMap(schematicConnectorsOfSequent).toSet, + sequents.flatMap(freeVariablesOfSequent).toSet, + sequents.flatMap(declaredBoundVariablesOfSequent).toSet + ) + + val (patternsFunctions, patternsPredicates, patternsConnectors, patternsFreeVariables, patternsBoundVariables) = + schemasOf(patterns) + val (valuesFunctions, valuesPredicates, valuesConnectors, valuesFreeVariables, valuesBoundVariables) = + schemasOf(values) + val (otherPatternsFunctions, otherPatternsPredicates, otherPatternsConnectors, otherPatternsFreeVariables,
otherPatternsBoundVariables) = + schemasOf(otherPatterns) + val (partialAssignedFunctions, partialAssignedPredicates, partialAssignedConnectors) = + (partialAssignment.functions.keySet, partialAssignment.predicates.keySet, partialAssignment.connectors.keySet) + val (allPatternsFunctions, allPatternsPredicates, allPatternsConnectors, allPatternsFreeVariables, allPatternsBoundVariables) = + ( + patternsFunctions ++ otherPatternsFunctions, + patternsPredicates ++ otherPatternsPredicates, + patternsConnectors ++ otherPatternsConnectors, + patternsFreeVariables ++ otherPatternsFreeVariables, + patternsBoundVariables ++ otherPatternsBoundVariables + ) + val valuesVariables = valuesFreeVariables ++ valuesBoundVariables + val allPatternsVariables = allPatternsFreeVariables ++ allPatternsBoundVariables + + // TODO: do we need to exclude the arguments from these sets? + val allValuesFunctions = valuesFunctions ++ partialAssignment.functions.values.flatMap { f => schematicTermLabelsOf(f.body) } ++ + (partialAssignment.predicates.values ++ partialAssignment.connectors.values).flatMap { f => schematicTermLabelsOf(f.body) } + val allValuesPredicates = valuesPredicates ++ + (partialAssignment.predicates.values ++ partialAssignment.connectors.values).flatMap { f => schematicPredicatesOf(f.body) } + val allValuesConnectors = valuesConnectors ++ + (partialAssignment.predicates.values ++ partialAssignment.connectors.values).flatMap { f => schematicConnectorsOf(f.body) } + val allValuesVariables = valuesVariables ++ partialAssignment.functions.values.flatMap { f => freeVariablesOf(f.body) } ++ + (partialAssignment.predicates.values ++ partialAssignment.connectors.values).flatMap { f => freeVariablesOf(f.body) ++ declaredBoundVariablesOf(f.body) } + + val (nonUnifiableFunctions, nonUnifiablePredicates, nonUnifiableConnectors) = + (otherPatternsFunctions.diff(patternsFunctions), otherPatternsPredicates.diff(patternsPredicates), otherPatternsConnectors.diff(patternsConnectors)) + + lazy 
val noInvalidSizeRange = patterns.size == values.size && patterns.size == formulaIndices.size && patterns.zip(formulaIndices).zip(values).forall { + case ((PartialSequent(leftPattern, rightPattern, _, _), (leftIndices, rightIndices)), Sequent(leftValue, rightValue)) => + def check(pattern: IndexedSeq[Formula], indices: IndexedSeq[Int], value: IndexedSeq[Formula]): Boolean = + pattern.size == indices.size && indices.forall(value.indices.contains) + check(leftPattern, leftIndices, leftValue) && check(rightPattern, rightIndices, rightValue) + } + lazy val noMalformedValues = values.forall(isSequentWellFormed) + lazy val noSchematicConnectorsValues = values.flatMap(schematicConnectorsOfSequent).isEmpty + lazy val noMalformedAssignment = // TODO some of these should be a contract in `UnificationContext` + partialAssignment.functions.values.forall(lambda => isWellFormed(lambda.body)) && + partialAssignment.predicates.values.forall(lambda => isWellFormed(lambda.body) && schematicConnectorsOf(lambda.body).isEmpty) && + partialAssignment.connectors.values.forall(lambda => isWellFormed(lambda.body) && schematicConnectorsOf(lambda.body).isEmpty) + lazy val noDeclaredUnknown = + partialAssignedFunctions.subsetOf(allPatternsFunctions) && + partialAssignedPredicates.subsetOf(allPatternsPredicates) && + partialAssignedConnectors.subsetOf(allPatternsConnectors) + lazy val noUndeclaredNonUnifiable = + nonUnifiableFunctions.subsetOf(partialAssignedFunctions) && + nonUnifiablePredicates.subsetOf(partialAssignedPredicates) && + nonUnifiableConnectors.subsetOf(partialAssignedConnectors) + + val allRequirements = + isLegalPatterns(patterns) && isLegalPatterns(otherPatterns) && + noInvalidSizeRange && noMalformedValues && noSchematicConnectorsValues && noMalformedAssignment && + noDeclaredUnknown && noUndeclaredNonUnifiable + + if (allRequirements) { + // All requirements are satisfied, we can proceed + // We must rename the symbols in the pattern such that they are distinct from the 
ones in the values + + // All the names that are already taken (for simplicity we rename everything) + val initialTakenFunctions: Set[SchematicTermLabel[?]] = + patternsFunctions ++ otherPatternsFunctions ++ allValuesFunctions + val initialTakenPredicates: Set[SchematicPredicateLabel[?]] = + patternsPredicates ++ otherPatternsPredicates ++ allValuesPredicates + val initialTakenConnectors: Set[SchematicConnectorLabel[?]] = + patternsConnectors ++ otherPatternsConnectors ++ allValuesConnectors + val initialTakenVariables: Set[VariableLabel] = // Free and bound + patternsFreeVariables ++ patternsBoundVariables ++ otherPatternsFreeVariables ++ otherPatternsBoundVariables ++ allValuesVariables + + def freshMapping[T <: LabelType](taken: Set[T], toRename: Set[T], constructor: (T, String) => T): Map[T, T] = { + val (finalMap, _) = toRename.foldLeft((Map.empty[T, T], taken.map(_.id))) { case ((map, currentTaken), oldSymbol) => + val newName = freshId(currentTaken, oldSymbol.id) + val newSymbol = constructor(oldSymbol, newName) + (map + (oldSymbol -> newSymbol), currentTaken + newName) + } + finalMap + } + + // TODO rename variables args + + val functionsFreshMapping = freshMapping(initialTakenFunctions, allPatternsFunctions, (label, newName) => SchematicTermLabel.unsafe(newName, label.arity)) + val predicatesFreshMapping = freshMapping(initialTakenPredicates, allPatternsPredicates, (label, newName) => SchematicPredicateLabel.unsafe(newName, label.arity)) + val connectorsFreshMapping = freshMapping(initialTakenConnectors, allPatternsConnectors, (label, newName) => SchematicConnectorLabel.unsafe(newName, label.arity)) + val variablesFreshMapping = freshMapping(initialTakenVariables, allPatternsFreeVariables ++ allPatternsBoundVariables, (_, newName) => VariableLabel(newName)) + + val (functionsInverseMapping, predicatesInverseMapping, connectorsInverseMapping, variablesInverseMapping) = + (functionsFreshMapping.map(_.swap), predicatesFreshMapping.map(_.swap), 
connectorsFreshMapping.map(_.swap), variablesFreshMapping.map(_.swap)) + + val renamedPartialAssignment = UnificationContext( + partialAssignment.predicates.map { case (k, v) => predicatesFreshMapping.getOrElse(k, k) -> v }, + partialAssignment.functions.map { case (k, v) => functionsFreshMapping.getOrElse(k, k) -> v }, + partialAssignment.connectors.map { case (k, v) => connectorsFreshMapping.getOrElse(k, k) -> v }, + partialAssignment.variables.map { case (k, v) => variablesFreshMapping.getOrElse(k, k) -> v } + ) + + def rename(patterns: IndexedSeq[PartialSequent]): IndexedSeq[PartialSequent] = { + def renameFormula(formula: Formula): Formula = + instantiateFormulaSchemas( + unsafeRenameVariables(formula, variablesFreshMapping), + functions = functionsFreshMapping.map { case (k, v) => RenamedLabel.unsafe(k, v).toAssignment }.toSeq, + predicates = predicatesFreshMapping.map { case (k, v) => RenamedLabel.unsafe(k, v).toAssignment }.toSeq, + connectors = connectorsFreshMapping.map { case (k, v) => RenamedLabel.unsafe(k, v).toAssignment }.toSeq + ) + def renameFormulas(formulas: IndexedSeq[Formula]): IndexedSeq[Formula] = formulas.map(renameFormula) + patterns.map(p => p.copy(left = renameFormulas(p.left), right = renameFormulas(p.right))) + } + + val (renamedPatterns, renamedOtherPatterns) = (rename(patterns), rename(otherPatterns)) + + val orderedValues = values.zip(formulaIndices).flatMap { case (value, (indicesLeft, indicesRight)) => + indicesLeft.map(value.left) ++ indicesRight.map(value.right) + } + + val constraints = collect(renamedPatterns.flatMap(_.formulas).zip(orderedValues), valuesFunctions, valuesPredicates, valuesConnectors, valuesVariables) + + val unified = constraints + .flatMap(unifyFromConstraints(_, renamedPartialAssignment, allValuesFunctions, allValuesPredicates, allValuesConnectors, allValuesVariables)) + .filter(assignment => // Check if the assignment is full (should this be an assertion?) 
+ assignment.functions.keySet.map(functionsInverseMapping) == allPatternsFunctions && + assignment.predicates.keySet.map(predicatesInverseMapping) == allPatternsPredicates && + assignment.connectors.keySet.map(connectorsInverseMapping) == allPatternsConnectors && + assignment.variables.keySet.map(variablesInverseMapping) == allPatternsVariables + ) + + unified.map { renamedAssignment => + val assignment = UnificationContext( + renamedAssignment.predicates.map { case (k, v) => predicatesInverseMapping.getOrElse(k, k) -> v }, + renamedAssignment.functions.map { case (k, v) => functionsInverseMapping.getOrElse(k, k) -> v }, + renamedAssignment.connectors.map { case (k, v) => connectorsInverseMapping.getOrElse(k, k) -> v }, + renamedAssignment.variables.map { case (k, v) => variablesInverseMapping.getOrElse(k, k) -> v } + ) + + // Union all + val otherValues = inflateValues(renamedOtherPatterns, values, renamedAssignment, formulaIndices) + + (otherValues, assignment) + } + } else { + None + } + } + + type SequentSelector = (IndexedSeq[Int], IndexedSeq[Int]) + + /** + * A helper method designed to enumerate all possible correspondences between patterns and values. 
+ * @param map an optional partial correspondence (or empty if none) + * @param patterns the patterns + * @param values the values + * @return a lazy view of correspondences + */ + def matchIndices(map: Map[Int, SequentSelector], patterns: IndexedSeq[PartialSequent], values: IndexedSeq[Sequent]): View[IndexedSeq[SequentSelector]] = { + require(patterns.size == values.size) + // Normally `patterns` shouldn't be empty, but this function works regardless + if (map.keySet.forall(patterns.indices.contains)) { + val selectors = patterns.indices.map(map.getOrElse(_, (IndexedSeq.empty, IndexedSeq.empty))) + selectors + .zip(patterns.zip(values)) + .map { case ((leftSelector, rightSelector), (pattern, value)) => + def enumerate(selectorSide: IndexedSeq[Int], patternSideSize: Int, isPatternPartial: Boolean, valueSide: Range): View[IndexedSeq[Int]] = { + // TODO remove the partial parameter as it is not needed in this direction + if (selectorSide.isEmpty) { // If empty we consider all permutations + // If `valueSide` is empty then it will produce an empty array + valueSide.combinations(patternSideSize).flatMap(_.permutations).toSeq.view + } else { + if (selectorSide.size == patternSideSize) { + if (selectorSide.forall(valueSide.contains)) { + // We return exactly what was selected + View(selectorSide) + } else { + // An index value is out of range + View.empty + } + } else { + // Number of args does not match the pattern's + View.empty + } + } + } + val leftSide = enumerate(leftSelector, pattern.left.size, pattern.partialLeft, value.left.indices) + val rightSide = enumerate(rightSelector, pattern.right.size, pattern.partialRight, value.right.indices) + for { + l <- leftSide + r <- rightSide + } yield IndexedSeq((l, r)) + } + .fold(View(IndexedSeq.empty[(IndexedSeq[Int], IndexedSeq[Int])])) { case (v1, v2) => + for { + first <- v1 + second <- v2 + } yield first ++ second + } + } else { + // Map contains values outside the range + View.empty + } + } + +} diff --git
a/lisa-front/src/main/scala/lisa/front/theory/SetTheory.scala b/lisa-front/src/main/scala/lisa/front/theory/SetTheory.scala new file mode 100644 index 0000000000000000000000000000000000000000..27b1b31d5ed4b17f838bedbf980ec94589686481 --- /dev/null +++ b/lisa-front/src/main/scala/lisa/front/theory/SetTheory.scala @@ -0,0 +1,52 @@ +package lisa.front.theory + +import lisa.front.fol.FOL.* +import lisa.kernel.proof.RunningTheory +import lisa.settheory.AxiomaticSetTheory + +/** + * The set theory package. See [[lisa.settheory.AxiomaticSetTheory]]. + */ +object SetTheory { + + // The purpose of this file is simply to lift the definitions from the kernel to the front + + /** + * A safe type representing a formula that is considered as an axiom in this theory. + */ + opaque type AxiomaticFormula <: Formula = Formula + + val membership: ConstantPredicateLabel[2] = fromKernel(AxiomaticSetTheory.in).asInstanceOf[ConstantPredicateLabel[2]] + val subset: ConstantPredicateLabel[2] = fromKernel(AxiomaticSetTheory.subset).asInstanceOf[ConstantPredicateLabel[2]] + val sameCardinality: ConstantPredicateLabel[2] = fromKernel(AxiomaticSetTheory.sim).asInstanceOf[ConstantPredicateLabel[2]] // `sim` (equinumerosity), not `in` + + val emptySet: ConstantFunctionLabel[0] = fromKernel(AxiomaticSetTheory.emptySet).asInstanceOf[ConstantFunctionLabel[0]] + val unorderedPairSet: ConstantFunctionLabel[2] = fromKernel(AxiomaticSetTheory.pair).asInstanceOf[ConstantFunctionLabel[2]] + // val singletonSet: ConstantFunctionLabel[1] = fromKernel(AxiomaticSetTheory.singleton).asInstanceOf[ConstantFunctionLabel[1]] + val powerSet: ConstantFunctionLabel[1] = fromKernel(AxiomaticSetTheory.powerSet).asInstanceOf[ConstantFunctionLabel[1]] + val unionSet: ConstantFunctionLabel[1] = fromKernel(AxiomaticSetTheory.union).asInstanceOf[ConstantFunctionLabel[1]] + val universeSet: ConstantFunctionLabel[1] = fromKernel(AxiomaticSetTheory.universe).asInstanceOf[ConstantFunctionLabel[1]] + + val axiomEmpty: AxiomaticFormula =
fromKernel(AxiomaticSetTheory.emptySetAxiom) + val axiomExtensionality: AxiomaticFormula = fromKernel(AxiomaticSetTheory.extensionalityAxiom) + val axiomPair: AxiomaticFormula = fromKernel(AxiomaticSetTheory.pairAxiom) + val axiomUnion: AxiomaticFormula = fromKernel(AxiomaticSetTheory.unionAxiom) + val axiomPower: AxiomaticFormula = fromKernel(AxiomaticSetTheory.powerAxiom) + val axiomFoundation: AxiomaticFormula = fromKernel(AxiomaticSetTheory.foundationAxiom) + + val axiomSchemaReplacement: AxiomaticFormula = fromKernel(AxiomaticSetTheory.replacementSchema) + + val axiomTarski: AxiomaticFormula = fromKernel(AxiomaticSetTheory.tarskiAxiom) + + val definitionSubset: AxiomaticFormula = { + val (x, y, z) = (VariableLabel("x"), VariableLabel("y"), VariableLabel("z")) + forall(x, forall(y, subset(x, y) <=> forall(z, (z in x) ==> (z in y)))) + } + + extension (term: Term) { + infix def in(other: Term): Formula = membership(term, other) + def subsetOf(other: Term): Formula = subset(term, other) + infix def ~(other: Term): Formula = sameCardinality(term, other) + } + +} diff --git a/lisa-front/src/test/scala/lisa/front/FrontMacroTests.scala b/lisa-front/src/test/scala/lisa/front/FrontMacroTests.scala new file mode 100644 index 0000000000000000000000000000000000000000..0c50fac48d65d3ecb05064411b76b7ac0d66c3dd --- /dev/null +++ b/lisa-front/src/test/scala/lisa/front/FrontMacroTests.scala @@ -0,0 +1,32 @@ +package lisa.front + +import lisa.front.{_, given} +import org.scalatest.funsuite.AnyFunSuite + +import scala.language.adhocExtensions + +class FrontMacroTests extends AnyFunSuite { + // TODO Front macros are not working due to changes to variables. + /* + test("string interpolation macros") { + term"g(x, y)" + formula"a /\ b \/ c => d" + sequent"a; b |- c" + partial"a |- b; ..." 
+ + val p = ConstantPredicateLabel[2]("p") + assert(formula"$p(x, y)".toString == "p(x, y)") + + val f = SchematicTermLabel[2]("f") + val y0:Term = SchematicTermLabel[0]("y")() + term"$y0" + assert(term"{$f(x, $y0)}".toString == "{?f(x, ?y)}") + assert(formula"{} = {$f(x, $y0)}".toString == "? = {?f(x, ?y)}") + + val p0 = ConstantPredicateLabel[0]("p") + val v = VariableLabel("x") + assert(sequent" |- $p0".toString == "? p") + assert(partial"\ $v. $v = {}; f($y0) |- $p0 /\ b; ...".toString == raw"\x. f(?y); x = ? ? p ? b; ?") + } + */ +} diff --git a/lisa-front/src/test/scala/lisa/front/ParserPrinterTests.scala b/lisa-front/src/test/scala/lisa/front/ParserPrinterTests.scala new file mode 100644 index 0000000000000000000000000000000000000000..f915ebc0273616a564d26a949e211fcce1b7a51b --- /dev/null +++ b/lisa-front/src/test/scala/lisa/front/ParserPrinterTests.scala @@ -0,0 +1,68 @@ +package lisa.front + +import lisa.front.parser.FrontReader +import lisa.front.printer.FrontPositionedPrinter +import lisa.front.printer.FrontPrintStyle +import org.scalatest.funsuite.AnyFunSuite + +import scala.language.adhocExtensions + +class ParserPrinterTests extends AnyFunSuite { + test("formula parsing and printing (ASCII)") { + Seq[String]( + "a", + raw"a /\ b", + raw"a /\ b \/ c => d <=> e", + "a => b => c => d", + "((a => b) => c) => d", + "forall x. ?x = ?z", + "f(a, b)", + raw"(a \/ b) /\ c", + raw"forall x, y. (?x = ?z) /\ (?x = ?y)", + raw"??f(g({?x, {{}, ?y}}, {}), a, a /\ b)", + "?s", + "?f(?s, ?g(?x), t) = ?u", + "exists x. forall y. 
?x = ?y" + ).foreach { s => + val formula = FrontReader.readFormula(s) + val printed = FrontPositionedPrinter.prettyFormula(formula, symbols = FrontPrintStyle.Ascii) + println(printed) + assert(printed == s) // actual == expected + } + } + + test("sequent parsing and printing (ASCII)") { + Seq[String]( + "|-", + "|- a", + "a |- b", + "a; b |- c; d", + raw"a /\ b; c \/ d |- e; f => g; h" + ).foreach { s => + val sequent = FrontReader.readSequent(s) + val printed = FrontPositionedPrinter.prettySequent(sequent, symbols = FrontPrintStyle.Ascii) + // println(printed) + assert(printed == s) + } + } + + test("partial sequent parsing and printing (ASCII)") { + Seq[String]( + "|-", + "|- a", + "a |- b", + "a; b |- c; d", + "... |- a; b", + "... |- ...", + "a |- b; ...", + "...; a |- b", + "...; a |- b; ...", + "...; a; b |- b; c; ..." + ).foreach { s => + val sequent = FrontReader.readPartialSequent(s) + val printed = FrontPositionedPrinter.prettyPartialSequent(sequent, symbols = FrontPrintStyle.Ascii) + // println(printed) + assert(printed == s) + } + } +} diff --git a/lisa-front/src/test/scala/lisa/front/UnificationTests.scala b/lisa-front/src/test/scala/lisa/front/UnificationTests.scala new file mode 100644 index 0000000000000000000000000000000000000000..26103982bae86ac4eb44d61a3d3445290c2c168e --- /dev/null +++ b/lisa-front/src/test/scala/lisa/front/UnificationTests.scala @@ -0,0 +1,256 @@ +package lisa.front + +import lisa.front.fol.FOL.LabelType +import lisa.front.fol.FOL.WithArityType +import lisa.front.printer.FrontPositionedPrinter +import lisa.front.printer.FrontPrintStyle +import lisa.front.{_, given} +import org.scalatest.Ignore +import org.scalatest.funsuite.AnyFunSuite + +import scala.language.adhocExtensions + +@Ignore +class UnificationTests extends AnyFunSuite { + + val (sa, sb, sc) = (SchematicPredicateLabel[0]("a"), SchematicPredicateLabel[0]("b"), SchematicPredicateLabel[0]("c")) + val (a, b, c) = (ConstantPredicateLabel[0]("a"), 
ConstantPredicateLabel[0]("b"), ConstantPredicateLabel[0]("c")) + + val (st, su, sv) = (SchematicTermLabel[0]("t"), SchematicTermLabel[0]("u"), SchematicTermLabel[0]("v")) + val (t, u, v) = (ConstantFunctionLabel[0]("t"), ConstantFunctionLabel[0]("u"), ConstantFunctionLabel[0]("v")) + + val (sf1, f1) = (SchematicTermLabel[1]("f1"), ConstantFunctionLabel[1]("f1")) + val (sg1) = (SchematicConnectorLabel[1]("g1")) + val (sp1, p1) = (SchematicPredicateLabel[1]("p1"), ConstantPredicateLabel[1]("p1")) + + val (x, y, z) = (VariableLabel("x"), VariableLabel("y"), VariableLabel("z")) + + def unify(pattern: Formula, target: Formula, partial: UnificationContext = emptyContext): Option[(IndexedSeq[Sequent], UnificationContext)] = + unifyAndResolve( + IndexedSeq(PartialSequent(IndexedSeq(pattern), IndexedSeq.empty)), + IndexedSeq(Sequent(IndexedSeq(target), IndexedSeq.empty)), + IndexedSeq.empty, + partial, + IndexedSeq((IndexedSeq(0), IndexedSeq.empty)) + ) + + @deprecated + def checkUnifies(pattern: Formula, target: Formula, partial: UnificationContext = emptyContext): Unit = { + assert( + unify(pattern, target, partial).nonEmpty, + s"Pattern ${FrontPositionedPrinter.prettyFormula(pattern, symbols = FrontPrintStyle.Unicode)} and " + + s"target ${FrontPositionedPrinter.prettyFormula(target, symbols = FrontPrintStyle.Unicode)} did not unify" + ) + } + + def checkDoesNotUnify(pattern: Formula, target: Formula, partial: UnificationContext = emptyContext): Unit = { + assert( + unify(pattern, target, partial).isEmpty, + s"Pattern ${FrontPositionedPrinter.prettyFormula(pattern, symbols = FrontPrintStyle.Unicode)} and " + + s"target ${FrontPositionedPrinter.prettyFormula(target, symbols = FrontPrintStyle.Unicode)} did unify" + ) + } + + def contextsEqual(ctx1: UnificationContext, ctx2: UnificationContext): Boolean = { + def names(lambda: WithArityType[?]): Seq[String] = (0 until lambda.arity).map(i => s"unique_name_$i") + def normalizeFunction[N <: Arity](lambda: LambdaFunction[N]) = 
{ + val newParams = names(lambda).map(SchematicTermLabel.apply[0]) + LambdaFunction.unsafe( + newParams, + instantiateTermSchemas(lambda.body, lambda.parameters.zip(newParams).map { case (l1, l2) => RenamedLabel.unsafe(l1, l2).toAssignment }) + ) + } + def normalizePredicate(lambda: LambdaPredicate[?]): LambdaPredicate[?] = { + val newParams = names(lambda).map(SchematicTermLabel.apply[0]) + LambdaPredicate.unsafe( + newParams, + instantiateFormulaSchemas(lambda.body, lambda.parameters.zip(newParams).map { case (l1, l2) => RenamedLabel.unsafe(l1, l2).toAssignment }, Seq.empty, Seq.empty) + ) + } + def normalizeConnector(lambda: LambdaConnector[?]): LambdaConnector[?] = { + val newParams = names(lambda).map(SchematicPredicateLabel.apply[0]) + LambdaConnector.unsafe( + newParams, + instantiateFormulaSchemas(lambda.body, Seq.empty, lambda.parameters.zip(newParams).map { case (l1, l2) => RenamedLabel.unsafe(l1, l2).toAssignment }, Seq.empty) + ) + } + def normalizeContext(ctx: UnificationContext): UnificationContext = + UnificationContext( + ctx.predicates.view.mapValues(normalizePredicate).toMap, + ctx.functions.view.mapValues(normalizeFunction).toMap, + ctx.connectors.view.mapValues(normalizeConnector).toMap, + ctx.variables + ) + normalizeContext(ctx1) == normalizeContext(ctx2) + } + + def checkUnifiesAs(pattern: Formula, target: Formula, partial: UnificationContext)(expected: UnificationContext): Unit = { + unify(pattern, target, partial) match { + case Some((_, resultCtx)) => assert(contextsEqual(resultCtx, expected), resultCtx) + case None => + fail( + s"Pattern ${FrontPositionedPrinter.prettyFormula(pattern, symbols = FrontPrintStyle.Unicode)} and " + + s"target ${FrontPositionedPrinter.prettyFormula(target, symbols = FrontPrintStyle.Unicode)} did not unify" + ) + } + } + + def checkUnifiesAs(pattern: Formula, target: Formula)(expected: UnificationContext): Unit = + checkUnifiesAs(pattern, target, emptyContext)(expected) +
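The test helpers above check the core behaviour of schematic matching: a 0-ary schematic symbol may stand for any formula, as long as all occurrences (and any partial mapping supplied up front) agree. A simplified standalone sketch of that idea, using a hypothetical mini-ADT rather than the actual lisa-front types, could look like:

```scala
// Simplified sketch (NOT the lisa-front API): matching a pattern containing
// 0-ary schematic symbols against a concrete formula, threading a partial
// assignment through the traversal.
sealed trait F
case class Const(name: String) extends F // constant predicate, e.g. a, b
case class Schem(name: String) extends F // 0-ary schematic predicate, e.g. ?a
case class Conj(left: F, right: F) extends F

object MatchSketch {
  // Returns an assignment of schematic names to formulas, or None on mismatch.
  def unify(pattern: F, target: F, ctx: Map[String, F] = Map.empty): Option[Map[String, F]] =
    (pattern, target) match {
      case (Const(p), Const(t)) => if (p == t) Some(ctx) else None
      case (Schem(s), t) =>
        ctx.get(s) match {
          case Some(prev) => if (prev == t) Some(ctx) else None // contradicting partial mapping
          case None => Some(ctx + (s -> t))
        }
      case (Conj(pl, pr), Conj(tl, tr)) =>
        unify(pl, tl, ctx).flatMap(c => unify(pr, tr, c))
      case _ => None
    }

  def main(args: Array[String]): Unit = {
    // ?a /\ b matches a /\ b by assigning ?a := a
    assert(unify(Conj(Schem("a"), Const("b")), Conj(Const("a"), Const("b"))) == Some(Map("a" -> Const("a"))))
    // ?a /\ ?a cannot match a /\ b: the two occurrences would need different values
    assert(unify(Conj(Schem("a"), Schem("a")), Conj(Const("a"), Const("b"))).isEmpty)
  }
}
```

The real implementation additionally handles bound variables, higher-arity schemas, and equivalence up to the OCBSL checker, which is why the tests compare whole `UnificationContext`s rather than plain maps.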
+ val emptyContext: UnificationContext = UnificationContext() + val emptyResult: UnificationContext = UnificationContext() + + test("same boolean constant") { + checkUnifiesAs(a, a)(emptyResult) + } + test("different boolean constants") { + checkDoesNotUnify(a, b) + } + test("boolean and schematic constants") { + checkDoesNotUnify(a, sa) + } + test("boolean constant expression") { + checkUnifiesAs(a /\ b, a /\ b)(emptyResult) + } + + test("schematic boolean constant") { + checkUnifiesAs(sa, a)(emptyResult + AssignedPredicate(sa, a)) + } + test("same schematic boolean constant") { + checkUnifiesAs(sa, sa)(emptyResult + AssignedPredicate(sa, sa)) + } + test("expression with schematic boolean constant") { + checkUnifiesAs(sa /\ b, a /\ b)(emptyResult + AssignedPredicate(sa, a)) + } + test("expression with multiple schematic boolean constants") { + checkUnifiesAs(sa /\ sb, a /\ b)(emptyResult + AssignedPredicate(sa, a) + AssignedPredicate(sb, b)) + } + test("matching two schematic boolean constants") { + checkUnifiesAs(sa /\ b, sb /\ b)(emptyResult + AssignedPredicate(sa, sb)) + } + test("schematic boolean constants match expressions") { + checkUnifiesAs(sa /\ sb, (a ==> b) /\ (c \/ a))(emptyResult + AssignedPredicate(sa, a ==> b) + AssignedPredicate(sb, c \/ a)) + } + test("schematic predicate matches same expressions") { + checkUnifiesAs(sa /\ sa, b /\ b)(emptyResult + AssignedPredicate(sa, b)) + } + test("schematic predicate matches equivalent expressions") { + checkUnifiesAs(sa /\ sa, (a \/ b) /\ (b \/ a))(emptyResult + AssignedPredicate(sa, a \/ b)) + } + test("schematic predicate does not match different constants") { + checkDoesNotUnify(sa /\ sa, a /\ b) + } + test("schematic 0-ary predicate: equivalent expression in the context") { + checkUnifiesAs(sa, a /\ b, emptyContext + AssignedPredicate(sa, b /\ a))( + emptyResult + AssignedPredicate(sa, b /\ a) + ) + } + test("schematic 0-ary predicate with partial mapping") { + checkUnifiesAs(sa, a, emptyContext + 
AssignedPredicate(sa, a))(emptyResult + AssignedPredicate(sa, a)) + } + test("schematic 0-ary predicate with contradicting partial mapping") { + checkDoesNotUnify(sa, a, emptyContext + AssignedPredicate(sa, b)) + } + test("schematic 0-ary predicate with contradicting partial mapping 2") { + checkDoesNotUnify(sa, a, emptyContext + AssignedPredicate(sb, a)) + } + + test("same predicate") { + checkUnifiesAs(p1(t), p1(t))(emptyResult) + } + test("predicate of schematic variable to predicate of constant") { + checkUnifiesAs(p1(st), p1(u))(emptyResult + AssignedFunction(st, u)) + } + test("schematic to constant predicate") { + checkUnifiesAs(sp1(t), p1(t))(emptyResult + AssignedPredicate(sp1, LambdaPredicate(x => p1(x)))) + } + test("schematic predicate to expression") { + checkUnifiesAs(sp1(t), p1(t) /\ p1(u))(emptyResult + AssignedPredicate(sp1, LambdaPredicate(x => p1(x) /\ p1(u)))) + } + test("schematic connector of boolean constant to boolean constant") { + checkUnifiesAs(sg1(a), b)(emptyResult + AssignedConnector(sg1, LambdaConnector(_ => b))) + } + test("schematic connector of schematic boolean constant does not match boolean constant") { + checkDoesNotUnify(sg1(sa), a) + } + test("schematic connector of schematic boolean constant with partial mapping of connector") { + checkUnifiesAs(sg1(sa), b, emptyContext + AssignedConnector(sg1, LambdaConnector(x => x)))( + emptyResult + + AssignedConnector(sg1, LambdaConnector(x => x)) + AssignedPredicate(sa, b) + ) + } + test("schematic connector of schematic boolean constant with partial mapping of 0-ary predicate") { + checkUnifiesAs(sg1(sa), b, emptyContext + AssignedPredicate(sa, b))( + emptyResult + AssignedPredicate(sa, b) + + AssignedConnector(sg1, LambdaConnector(x => x)) + ) + } + test("schematic connector of schematic boolean constant with partial mapping of connector and 0-ary predicate") { + checkUnifiesAs(sg1(sa), b, emptyContext + AssignedConnector(sg1, LambdaConnector(x => x)) + AssignedPredicate(sa, b))( + 
emptyContext + AssignedConnector(sg1, LambdaConnector(x => x)) + AssignedPredicate(sa, b) + ) + } + test("schematic connector of schematic boolean constant with partial mapping of connector and different 0-ary predicate") { + checkDoesNotUnify(sg1(sa), b, emptyContext + AssignedConnector(sg1, LambdaConnector(x => x)) + AssignedPredicate(sa, a)) + } + test("schematic connector of schematic boolean constant with partial mapping of 0-ary predicate and different connector") { + checkDoesNotUnify(sg1(sa), b, emptyContext + AssignedConnector(sg1, LambdaConnector(_ => a)) + AssignedPredicate(sa, b)) + } + test("predicate of a variable") { + checkUnifiesAs(p1(x), p1(x))(emptyResult + (x -> x)) + } + test("predicate of a variable to a predicate of a different variable") { + checkUnifiesAs(p1(x), p1(y))(emptyResult + (x -> y)) + } + test("predicate of a variable to a predicate of a different variable with partial mapping of variables") { + checkUnifiesAs(p1(x), p1(y), emptyContext + (x -> y))(emptyResult + (x -> y)) + } + test("predicate of a variable to a predicate of a different variable with incompatible partial mapping of variables") { + checkDoesNotUnify(p1(x), p1(y), emptyContext + (x -> z)) + } + + test("exists constant") { + checkUnifiesAs(exists(x, a), exists(x, a))(emptyResult + (x -> x)) + } + test("exists constant with different bound variables") { + checkUnifiesAs(exists(x, a), exists(y, a))(emptyResult + (x -> y)) + } + test("exists expression with different bound variables") { + checkUnifiesAs(exists(x, p1(x)), exists(y, p1(y)))(emptyResult + (x -> y)) + } + test("exists expression of bound vs free variable") { + checkDoesNotUnify(exists(x, p1(x)), exists(y, p1(z))) + } + test("exists schematic predicate to exists expression") { + checkUnifiesAs(exists(x, sp1(x)), exists(y, p1(y) /\ a))(emptyResult + (x -> y) + AssignedPredicate(sp1, LambdaPredicate(v => p1(v) /\ a))) + } + test("exists schematic boolean constant to exists predicate") { + 
checkUnifiesAs(exists(x, sa), exists(x, p1(t)))(emptyResult + (x -> x) + AssignedPredicate(sa, p1(t))) + } + test("exists schematic boolean constant to exists unary predicate (does not unify)") { + checkDoesNotUnify(exists(x, sa), exists(x, p1(x))) + } + test("exists schematic unary predicate to exists unary predicate with incompatible predicate mapping") { + checkDoesNotUnify(exists(x, sp1(x)), exists(x, p1(x)), emptyContext + (x -> x) + AssignedPredicate(sp1, LambdaPredicate(_ => p1(x)))) + } + test("exists expression: switch bound and free variable") { + checkUnifiesAs(exists(x, exists(y, p1(x) /\ p1(y))), exists(y, exists(x, p1(y) /\ p1(x))))( + emptyResult + (x -> y) + (y -> x) + ) + } + + test("term with different schematic variables to a term with the same variable") { + checkUnifiesAs(p1(st) /\ p1(su), p1(x) /\ p1(x))(emptyResult + AssignedFunction(st, x) + AssignedFunction(su, x)) + } + test("term with same schematic variable to a term with different variables (does not unify)") { + checkDoesNotUnify(p1(st) /\ p1(st), p1(x) /\ p1(y)) + } + test("term with same schematic variable to a term with different schematic variables (does not unify)") { + checkDoesNotUnify(p1(st) /\ p1(st), p1(su) /\ p1(sv)) + } + test("rename schematic variable") { + checkUnifiesAs(p1(st) /\ p1(st), p1(su) /\ p1(su))(emptyResult + AssignedFunction(st, su)) + } +} diff --git a/lisa-kernel/src/main/scala/lisa/kernel/fol/CommonDefinitions.scala b/lisa-kernel/src/main/scala/lisa/kernel/fol/CommonDefinitions.scala index 1faa2a2089790ded35da0cb57ae4d7a02c5e4456..c936b33a797cd0be760e2b9350b68c6a16b8ecb1 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/fol/CommonDefinitions.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/fol/CommonDefinitions.scala @@ -7,7 +7,7 @@ private[fol] trait CommonDefinitions { val MaxArity: Int = 1000000 /** - * An labelled node for tree-like structures. + * A labelled node for tree-like structures. 
*/ protected trait Label { val arity: Int } /** - * Constant label can represent a symbol of a theory + * A constant label can represent a fixed symbol of a theory or a logical symbol */ trait ConstantLabel extends Label /** - * Schematic label in a formula can be substituted by any constant label of the respective - * kind (predicate or function) + * A schematic label in a formula or a term can be substituted by any formula or term of the adequate kind. */ trait SchematicLabel extends Label + /** + * Returns an identifier that is different from a given set of identifiers. + * @param taken identifiers which are not available + * @param base prefix of the new id + * @return a fresh id. + */ def freshId(taken: Set[String], base: String): String = { var i = 0; while (taken contains base + "_" + i) i += 1 diff --git a/lisa-kernel/src/main/scala/lisa/kernel/fol/EquivalenceChecker.scala b/lisa-kernel/src/main/scala/lisa/kernel/fol/EquivalenceChecker.scala index def0f7fa2c336179391cb2fabf0bba49a24d59eb..7949c8150c7ad2a35ccbf875d9c2eaef3182af19 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/fol/EquivalenceChecker.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/fol/EquivalenceChecker.scala @@ -12,6 +12,7 @@ import scala.math.Numeric.IntIsIntegral * For soundness, this relation should always be a subrelation of the usual FOL implication. * The implementation checks for Orthocomplemented Bismeilatices equivalence, plus symetry and reflexivity * of equality and alpha-equivalence.
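The `freshId` helper above admits a short standalone illustration. The sketch below re-implements its counting loop for demonstration only; the final `base + "_" + i` return expression is an assumption implied by the doc comment, since the body is truncated in the hunk:

```scala
// Standalone re-implementation (not an import of the kernel) of freshId's
// logic: append "_<n>" to the base name, incrementing n until the candidate
// collides with no taken identifier.
object FreshIdSketch {
  def freshId(taken: Set[String], base: String): String = {
    var i = 0
    while (taken contains (base + "_" + i)) i += 1
    base + "_" + i // assumed return value, matching "@return a fresh id"
  }

  def main(args: Array[String]): Unit = {
    assert(freshId(Set("x", "y"), "x") == "x_0")     // "x_0" is free immediately
    assert(freshId(Set("x_0", "x_1"), "x") == "x_2") // skips the taken suffixes
  }
}
```

Note that the result is fresh with respect to `taken` but deterministic, which keeps proof-checking reproducible.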
+ * See https://github.com/epfl-lara/OCBSL for more information */ private[fol] trait EquivalenceChecker extends FormulaDefinitions { sealed abstract class SimpleFormula { @@ -21,6 +22,9 @@ private[fol] trait EquivalenceChecker extends FormulaDefinitions { case class SimplePredicate(id: PredicateLabel, args: List[Term]) extends SimpleFormula { val size = 1 } + case class SimpleConnector(id: ConnectorLabel, args: List[SimpleFormula]) extends SimpleFormula { + val size = 1 + } case class SNeg(child: SimpleFormula) extends SimpleFormula { val size: Int = 1 + child.size } @@ -41,6 +45,7 @@ private[fol] trait EquivalenceChecker extends FormulaDefinitions { val code: Int } case class NormalPredicate(id: PredicateLabel, args: List[Term], code: Int) extends NormalFormula + case class NormalConnector(id: ConnectorLabel, args: List[NormalFormula], code: Int) extends NormalFormula case class NNeg(child: NormalFormula, code: Int) extends NormalFormula case class NOr(children: List[NormalFormula], code: Int) extends NormalFormula case class NForall(x: String, inner: NormalFormula, code: Int) extends NormalFormula @@ -76,6 +81,7 @@ private[fol] trait EquivalenceChecker extends FormulaDefinitions { case And => SNeg(SOr(args.map(c => SNeg(removeSugar(c))).toList)) case Or => SOr((args map removeSugar).toList) + case _ => SimpleConnector(label, args.toList.map(removeSugar)) } case BinderFormula(label, bound, inner) => label match { @@ -94,14 +100,15 @@ ) def toLocallyNameless(t: Term, subst: Map[VariableLabel, Int], i: Int): Term = t match { - case VariableTerm(label) => + case Term(label: VariableLabel, _) => if (subst.contains(label)) VariableTerm(VariableLabel("x" + (i - subst(label)))) else VariableTerm(VariableLabel("_" + label)) - case FunctionTerm(label, args) => FunctionTerm(label, args.map(c => toLocallyNameless(c, subst, i))) + case Term(label, args) => Term(label, args.map(c => toLocallyNameless(c,
subst, i))) } def toLocallyNameless(phi: SimpleFormula, subst: Map[VariableLabel, Int], i: Int): SimpleFormula = phi match { case SimplePredicate(id, args) => SimplePredicate(id, args.map(c => toLocallyNameless(c, subst, i))) + case SimpleConnector(id, args) => SimpleConnector(id, args.map(f => toLocallyNameless(f, subst, i))) case SNeg(child) => SNeg(toLocallyNameless(child, subst, i)) case SOr(children) => SOr(children.map(toLocallyNameless(_, subst, i))) case SForall(x, inner) => SForall("", toLocallyNameless(inner, subst + (VariableLabel(x) -> i), i + 1)) @@ -119,9 +126,9 @@ private[fol] trait EquivalenceChecker extends FormulaDefinitions { def codesOfTerm(t: Term): Int = codesTerms.getOrElseUpdate( t, t match { - case VariableTerm(label) => + case Term(label: VariableLabel, _) => codesSigTerms.getOrElseUpdate((label, Nil), codesSigTerms.size) - case FunctionTerm(label, args) => + case Term(label, args) => val c = args map codesOfTerm codesSigTerms.getOrElseUpdate((label, c), codesSigTerms.size) @@ -195,7 +202,11 @@ private[fol] trait EquivalenceChecker extends FormulaDefinitions { val r: List[NormalFormula] = phi match { case SimplePredicate(id, args) => val lab = "pred_" + id.id + "_" + id.arity - if (id == equality) { + if (id == top) { + phi.normalForm = Some(NLiteral(true)) + } else if (id == bot) { + phi.normalForm = Some(NLiteral(false)) + } else if (id == equality) { if (codesOfTerm(args(0)) == codesOfTerm(args(1))) phi.normalForm = Some(NLiteral(true)) else @@ -204,6 +215,10 @@ private[fol] trait EquivalenceChecker extends FormulaDefinitions { phi.normalForm = Some(NormalPredicate(id, args, updateCodesSig((lab, args map codesOfTerm)))) } phi.normalForm.get :: acc + case SimpleConnector(id, args) => + val lab = "conn_" + id.id + "_" + id.arity + phi.normalForm = Some(NormalConnector(id, args.map(_.normalForm.get), updateCodesSig((lab, args map OCBSLCode)))) + phi.normalForm.get :: acc case SNeg(child) => pNeg(child, phi, acc) case SOr(children) => 
children.foldLeft(acc)((p, a) => pDisj(a, p)) case SForall(x, inner) => @@ -229,7 +244,13 @@ val r: List[NormalFormula] = phi match { case SimplePredicate(id, args) => val lab = "pred_" + id.id + "_" + id.arity - if (id == equality) { + if (id == top) { + phi.normalForm = Some(NLiteral(true)) + parent.normalForm = Some(NLiteral(false)) + } else if (id == bot) { + phi.normalForm = Some(NLiteral(false)) + parent.normalForm = Some(NLiteral(true)) + } else if (id == equality) { if (codesOfTerm(args(0)) == codesOfTerm(args(1))) { phi.normalForm = Some(NLiteral(true)) parent.normalForm = Some(NLiteral(false)) @@ -240,8 +261,13 @@ } else { phi.normalForm = Some(NormalPredicate(id, args, updateCodesSig((lab, args map codesOfTerm)))) parent.normalForm = Some(NNeg(phi.normalForm.get, updateCodesSig(("neg", List(phi.normalForm.get.code))))) } parent.normalForm.get :: acc + case SimpleConnector(id, args) => + val lab = "conn_" + id.id + "_" + id.arity + phi.normalForm = Some(NormalConnector(id, args.map(_.normalForm.get), updateCodesSig((lab, args map OCBSLCode)))) + parent.normalForm = Some(NNeg(phi.normalForm.get, updateCodesSig(("neg", List(phi.normalForm.get.code))))) + parent.normalForm.get :: acc case SNeg(child) => pDisj(child, acc) case SForall(x, inner) => val r = OCBSLCode(inner) diff --git a/lisa-kernel/src/main/scala/lisa/kernel/fol/FormulaDefinitions.scala b/lisa-kernel/src/main/scala/lisa/kernel/fol/FormulaDefinitions.scala index 843211081e4f17505c58867fafebd95cfc349a70..81247cb99d917424b2490ba38bfd265e29bce580 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/fol/FormulaDefinitions.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/fol/FormulaDefinitions.scala @@ -10,13 +10,34 @@ private[fol] trait FormulaDefinitions extends
FormulaLabelDefinitions with TermD * The parent class of formulas. * A formula is a tree whose nodes are either terms or labeled by predicates or logical connectors. */ - sealed abstract class Formula extends TreeWithLabel[FormulaLabel] { + sealed trait Formula extends TreeWithLabel[FormulaLabel] { + val arity: Int = label.arity + override def constantTermLabels: Set[ConstantFunctionLabel] + override def schematicTermLabels: Set[SchematicTermLabel] + override def freeSchematicTermLabels: Set[SchematicTermLabel] + override def freeVariables: Set[VariableLabel] - def constantFunctions: Set[ConstantFunctionLabel] - def schematicTerms: Set[SchematicTermLabel] + /** + * @return The list of constant predicate symbols in the formula. + */ + def constantPredicateLabels: Set[ConstantPredicateLabel] + + /** + * @return The list of schematic predicate symbols in the formula, including variable formulas. + */ + def schematicPredicateLabels: Set[SchematicVarOrPredLabel] + + /** + * @return The list of schematic connector symbols in the formula. + */ + def schematicConnectorLabels: Set[SchematicConnectorLabel] + + /** + * @return The list of schematic connector, predicate and formula variable symbols in the formula.
+ */ + def schematicFormulaLabels: Set[SchematicFormulaLabel] = + (schematicPredicateLabels.toSet: Set[SchematicFormulaLabel]) union (schematicConnectorLabels.toSet: Set[SchematicFormulaLabel]) - def constantPredicates: Set[ConstantPredicateLabel] - def schematicPredicates: Set[SchematicPredicateLabel] } /** @@ -24,48 +45,68 @@ private[fol] trait FormulaDefinitions extends FormulaLabelDefinitions with TermD */ sealed case class PredicateFormula(label: PredicateLabel, args: Seq[Term]) extends Formula { require(label.arity == args.size) - override def freeVariables: Set[VariableLabel] = args.foldLeft(Set.empty[VariableLabel])((prev, next) => prev union next.freeVariables) - - override def constantPredicates: Set[ConstantPredicateLabel] = label match { + override def constantTermLabels: Set[ConstantFunctionLabel] = + args.foldLeft(Set.empty[ConstantFunctionLabel])((prev, next) => prev union next.constantTermLabels) + override def schematicTermLabels: Set[SchematicTermLabel] = + args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.schematicTermLabels) + override def freeSchematicTermLabels: Set[SchematicTermLabel] = + args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.freeSchematicTermLabels) + override def freeVariables: Set[VariableLabel] = + args.foldLeft(Set.empty[VariableLabel])((prev, next) => prev union next.freeVariables) + override def constantPredicateLabels: Set[ConstantPredicateLabel] = label match { case l: ConstantPredicateLabel => Set(l) - case l: SchematicPredicateLabel => Set() + case _ => Set() } - override def schematicPredicates: Set[SchematicPredicateLabel] = label match { - case l: ConstantPredicateLabel => Set() - case l: SchematicPredicateLabel => Set(l) + override def schematicPredicateLabels: Set[SchematicVarOrPredLabel] = label match { + case l: SchematicVarOrPredLabel => Set(l) + case _ => Set() } - - override def constantFunctions: Set[ConstantFunctionLabel] = 
args.foldLeft(Set.empty[ConstantFunctionLabel])((prev, next) => prev union next.constantFunctions) - override def schematicTerms: Set[SchematicTermLabel] = args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.schematicTerms) + override def schematicConnectorLabels: Set[SchematicConnectorLabel] = Set() } /** * The formula counterpart of [[ConnectorLabel]]. */ sealed case class ConnectorFormula(label: ConnectorLabel, args: Seq[Formula]) extends Formula { - require(label.arity == -1 || label.arity == args.length) - override def freeVariables: Set[VariableLabel] = args.foldLeft(Set.empty[VariableLabel])((prev, next) => prev union next.freeVariables) - - override def constantFunctions: Set[ConstantFunctionLabel] = args.foldLeft(Set.empty[ConstantFunctionLabel])((prev, next) => prev union next.constantFunctions) - override def schematicTerms: Set[SchematicTermLabel] = args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.schematicTerms) - - override def constantPredicates: Set[ConstantPredicateLabel] = args.foldLeft(Set.empty[ConstantPredicateLabel])((prev, next) => prev union next.constantPredicates) - override def schematicPredicates: Set[SchematicPredicateLabel] = args.foldLeft(Set.empty[SchematicPredicateLabel])((prev, next) => prev union next.schematicPredicates) + require(label.arity == args.size || label.arity == -1) + require(label.arity != 0) + override def constantTermLabels: Set[ConstantFunctionLabel] = + args.foldLeft(Set.empty[ConstantFunctionLabel])((prev, next) => prev union next.constantTermLabels) + override def schematicTermLabels: Set[SchematicTermLabel] = + args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.schematicTermLabels) + override def freeSchematicTermLabels: Set[SchematicTermLabel] = + args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.freeSchematicTermLabels) + override def freeVariables: Set[VariableLabel] = + 
args.foldLeft(Set.empty[VariableLabel])((prev, next) => prev union next.freeVariables) + override def constantPredicateLabels: Set[ConstantPredicateLabel] = + args.foldLeft(Set.empty[ConstantPredicateLabel])((prev, next) => prev union next.constantPredicateLabels) + override def schematicPredicateLabels: Set[SchematicVarOrPredLabel] = + args.foldLeft(Set.empty[SchematicVarOrPredLabel])((prev, next) => prev union next.schematicPredicateLabels) + override def schematicConnectorLabels: Set[SchematicConnectorLabel] = label match { + case l: ConstantConnectorLabel => + args.foldLeft(Set.empty[SchematicConnectorLabel])((prev, next) => prev union next.schematicConnectorLabels) + case l: SchematicConnectorLabel => + args.foldLeft(Set(l))((prev, next) => prev union next.schematicConnectorLabels) + } } /** * The formula counterpart of [[BinderLabel]]. */ sealed case class BinderFormula(label: BinderLabel, bound: VariableLabel, inner: Formula) extends Formula { + override def constantTermLabels: Set[ConstantFunctionLabel] = inner.constantTermLabels + override def schematicTermLabels: Set[SchematicTermLabel] = inner.schematicTermLabels + override def freeSchematicTermLabels: Set[SchematicTermLabel] = inner.freeSchematicTermLabels - bound override def freeVariables: Set[VariableLabel] = inner.freeVariables - bound - - override def constantFunctions: Set[ConstantFunctionLabel] = inner.constantFunctions - override def schematicTerms: Set[SchematicTermLabel] = inner.schematicTerms - bound - - override def constantPredicates: Set[ConstantPredicateLabel] = inner.constantPredicates - override def schematicPredicates: Set[SchematicPredicateLabel] = inner.schematicPredicates + override def constantPredicateLabels: Set[ConstantPredicateLabel] = inner.constantPredicateLabels + override def schematicPredicateLabels: Set[SchematicVarOrPredLabel] = inner.schematicPredicateLabels + override def schematicConnectorLabels: Set[SchematicConnectorLabel] = inner.schematicConnectorLabels } + /** + 
* Binds multiple variables at the same time + */ + @deprecated def bindAll(binder: BinderLabel, vars: Seq[VariableLabel], phi: Formula): Formula = vars.foldLeft(phi)((f, v) => BinderFormula(binder, v, f)) diff --git a/lisa-kernel/src/main/scala/lisa/kernel/fol/FormulaLabelDefinitions.scala b/lisa-kernel/src/main/scala/lisa/kernel/fol/FormulaLabelDefinitions.scala index d3252743d03d02a7a784f8a287e4d24c0941da14..fb6e3bc7ddd6efd5f086afc83afd809530d3a3c1 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/fol/FormulaLabelDefinitions.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/fol/FormulaLabelDefinitions.scala @@ -7,37 +7,26 @@ private[fol] trait FormulaLabelDefinitions extends CommonDefinitions { /** * The parent class of formula labels. - * It similar as with terms; they denote the Predicates and logical connector themselves, and not the terms they help forming. - * They label the nodes of a tree that defines a formula. - */ - sealed abstract class FormulaLabel extends Label with Ordered[FormulaLabel] { - def priority: Int = this match { - case _: ConstantPredicateLabel => 1 - case _: SchematicPredicateLabel => 2 - case _: ConnectorLabel => 3 - case _: BinderLabel => 4 - } - - /** - * Compare two formula labels by type, then by arity, then by id. - */ - def compare(that: FormulaLabel): Int = (this, that) match { - case (thi: ConstantPredicateLabel, tha: ConstantPredicateLabel) => (2 * (thi.arity compare tha.arity) + (thi.id compare tha.id)).sign - case (thi: SchematicPredicateLabel, tha: SchematicPredicateLabel) => (2 * (thi.arity compare tha.arity) + (thi.id compare tha.id)).sign - case (thi: ConnectorLabel, tha: ConnectorLabel) => thi.id compare tha.id - case (thi: BinderLabel, tha: BinderLabel) => thi.id compare tha.id - case _ => this.priority - that.priority - } - } + * These are labels that can be applied to nodes that form the tree of a formula. + * In logical terms, those labels are FOL symbols or predicate symbols, including equality. 
+ */ + sealed abstract class FormulaLabel extends Label /** * The label for a predicate, namely a function taking a fixed number of terms and returning a formula. * In logical terms it is a predicate symbol. */ - sealed abstract class PredicateLabel extends FormulaLabel { + sealed trait PredicateLabel extends FormulaLabel { require(arity < MaxArity && arity >= 0) } + /** + * The label for a connector, namely a function taking a fixed number of formulas and returning another formula. + */ + sealed trait ConnectorLabel extends FormulaLabel { + require(arity < MaxArity && arity >= -1) + } + /** * A standard predicate symbol. Typical example are equality (=) and membership (∈) */ @@ -45,42 +34,53 @@ private[fol] trait FormulaLabelDefinitions extends CommonDefinitions { /** * The equality symbol (=) for first order logic. + * It is represented as any other predicate symbol but has unique semantic and deduction rules. */ val equality: ConstantPredicateLabel = ConstantPredicateLabel("=", 2) + val top: ConstantPredicateLabel = ConstantPredicateLabel("⊤", 0) + val bot: ConstantPredicateLabel = ConstantPredicateLabel("⊥", 0) /** - * A predicate symbol that can be instantiated with any formula. + * The label for a connector, namely a function taking a fixed number of formulas and returning another formula. */ - sealed abstract class SchematicPredicateLabel extends PredicateLabel with SchematicLabel + sealed abstract class ConstantConnectorLabel(val id: String, val arity: Int) extends ConnectorLabel with ConstantLabel + case object Neg extends ConstantConnectorLabel("¬", 1) + + case object Implies extends ConstantConnectorLabel("⇒", 2) + + case object Iff extends ConstantConnectorLabel("↔", 2) + + case object And extends ConstantConnectorLabel("∧", -1) + + case object Or extends ConstantConnectorLabel("∨", -1) /** - * A predicate symbol of non-zero arity that can be instantiated with any formula taking arguments. 
+ * A schematic symbol that can be instantiated with some formula. + * We distinguish arity-0 schematic formula labels, arity ≥ 1 schematic predicates and arity ≥ 1 schematic connectors. */ - sealed case class SchematicNPredicateLabel(id: String, arity: Int) extends SchematicPredicateLabel + sealed trait SchematicFormulaLabel extends FormulaLabel with SchematicLabel + + /** + * A schematic symbol whose arguments are any number of terms. This means the symbol is either a variable formula or a predicate schema. + */ + sealed trait SchematicVarOrPredLabel extends SchematicFormulaLabel with PredicateLabel /** * A predicate symbol of arity 0 that can be instantiated with any formula. */ - sealed case class VariableFormulaLabel(id: String) extends SchematicPredicateLabel { + sealed case class VariableFormulaLabel(id: String) extends SchematicVarOrPredLabel { val arity = 0 } /** - * The label for a connector, namely a function taking a fixed number of formulas and returning another formula. + * A predicate symbol of non-zero arity that can be instantiated with any functional formula taking term arguments. */ - sealed abstract class ConnectorLabel(val id: String, val arity: Int) extends FormulaLabel { - require(arity < MaxArity && arity >= -1) - } - - case object Neg extends ConnectorLabel("¬", 1) - - case object Implies extends ConnectorLabel("⇒", 2) + sealed case class SchematicPredicateLabel(id: String, arity: Int) extends SchematicVarOrPredLabel - case object Iff extends ConnectorLabel("↔", 2) - - case object And extends ConnectorLabel("∧", -1) - - case object Or extends ConnectorLabel("∨", -1) + /** + * A connector symbol of non-zero arity that can be instantiated with any functional formula taking formula arguments. + */ + sealed case class SchematicConnectorLabel(id: String, arity: Int) extends SchematicFormulaLabel with ConnectorLabel /** * The label for a binder, namely an object with a body that has the ability to bind variables in it.
@@ -89,12 +89,24 @@ private[fol] trait FormulaLabelDefinitions extends CommonDefinitions { val arity = 1 } + /** + * The symbol of the universal quantifier ∀ + */ case object Forall extends BinderLabel(id = "∀") + /** + * The symbol of the existential quantifier ∃ + */ case object Exists extends BinderLabel(id = "∃") + /** + * The symbol of the quantifier for existence and unicity ∃! + */ case object ExistsOne extends BinderLabel(id = "∃!") - def isSame(l: FormulaLabel, r: FormulaLabel): Boolean = (l compare r) == 0 + /** + * A function returning true if and only if the two symbols are considered "the same", i.e. same category, same arity and same id. + */ + def isSame(l: FormulaLabel, r: FormulaLabel): Boolean = l == r } diff --git a/lisa-kernel/src/main/scala/lisa/kernel/fol/Substitutions.scala b/lisa-kernel/src/main/scala/lisa/kernel/fol/Substitutions.scala index edc866474c905103753647a745e61d235fd6ae73..7f8102696294c9bde372f4cbf59c63601643474d 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/fol/Substitutions.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/fol/Substitutions.scala @@ -3,7 +3,8 @@ package lisa.kernel.fol trait Substitutions extends FormulaDefinitions { /** - * A lambda term to express a "term with holes". Main use is to be substituted in place of a function schema. + * A lambda term to express a "term with holes". Main use is to be substituted in place of a function schema or variable. + * Also used for some deduction rules. * Morally equivalent to a 2-tuples containing the same informations. * @param vars The names of the "holes" in the term, necessarily of arity 0. The bound variables of the functional term. * @param body The term represented by the object, up to instantiation of the bound schematic variables in args. @@ -13,7 +14,8 @@ trait Substitutions extends FormulaDefinitions { } /** - * A lambda formula to express a "formula with holes". Main use is to be substituted in place of a predicate schema. 
+ * A lambda formula to express a "formula with term holes". Main use is to be substituted in place of a predicate schema. + * Also used for some deduction rules. * Morally equivalent to a 2-tuples containing the same informations. * @param vars The names of the "holes" in a formula, necessarily of arity 0. The bound variables of the functional formula. * @param body The formula represented by the object, up to instantiation of the bound schematic variables in args. @@ -22,17 +24,20 @@ trait Substitutions extends FormulaDefinitions { def apply(args: Seq[Term]): Formula = { substituteVariables(body, (vars zip args).toMap) } - // def instantiateFunctionSchemas(phi: Formula, m: Map[SchematicFunctionLabel, LambdaTermTerm]):Formula = ??? } /** - * A lambda formula to express a "formula with holes". Usefull for rules such as Iff substitution + * A lambda formula to express a "formula with formula holes". Main use is to be substituted in place of a connector schema. + * Also used for some deduction rules. * Morally equivalent to a 2-tuples containing the same informations. * @param vars The names of the "holes" in a formula, necessarily of arity 0. * @param body The formula represented by the object, up to instantiation of the bound schematic variables in args. */ case class LambdaFormulaFormula(vars: Seq[VariableFormulaLabel], body: Formula) { - def apply(args: Seq[Formula]): Formula = instantiatePredicateSchemas(body, (vars zip (args map (LambdaTermFormula(Nil, _)))).toMap) + def apply(args: Seq[Formula]): Formula = { + substituteFormulaVariables(body, (vars zip args).toMap) + // instantiatePredicateSchemas(body, (vars zip (args map (LambdaTermFormula(Nil, _)))).toMap) + } } ////////////////////////// @@ -40,34 +45,33 @@ trait Substitutions extends FormulaDefinitions { ////////////////////////// /** - * Performs simultaneous substitution of multiple variables by multiple terms in a term t. 
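The Lambda classes above are best understood as expressions with named arity-0 holes that are all filled simultaneously on application. A standalone sketch of the idea (the toy AST and the name `LamTT` are illustrative assumptions, not the kernel's actual classes):

```scala
// Toy model of a "term with holes": a body over named arity-0 holes,
// applied by simultaneously replacing each hole with the matching argument.
sealed trait T
case class V(name: String) extends T              // a variable / hole
case class App(f: String, args: Seq[T]) extends T // a function application

case class LamTT(vars: Seq[String], body: T) {
  def apply(args: Seq[T]): T = {
    require(args.length == vars.length)
    val m = (vars zip args).toMap
    def subst(t: T): T = t match {
      case V(n) => m.getOrElse(n, t)
      case App(f, as) => App(f, as.map(subst))
    }
    subst(body)
  }
}

// λx y. f(x, y) applied to (a, g(b)) gives f(a, g(b))
val lam = LamTT(Seq("x", "y"), App("f", Seq(V("x"), V("y"))))
val r = lam(Seq(V("a"), App("g", Seq(V("b")))))
```

Simultaneity matters: every hole is replaced in one pass, so a lambda that swaps its two arguments behaves correctly even when the arguments mention each other's names.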
+ * Performs simultaneous substitution of multiple variables by multiple terms in a term. * @param t The base term * @param m A map from variables to terms. * @return t[m] */ def substituteVariables(t: Term, m: Map[VariableLabel, Term]): Term = t match { - case VariableTerm(label) => m.getOrElse(label, t) - case FunctionTerm(label, args) => FunctionTerm(label, args.map(substituteVariables(_, m))) + case Term(label: VariableLabel, _) => m.getOrElse(label, t) + case Term(label, args) => Term(label, args.map(substituteVariables(_, m))) } /** * Performs simultaneous substitution of schematic function symbol by "functional" terms, or terms with holes. * If the arity of one of the function symbol to substitute doesn't match the corresponding number of arguments, it will produce an error. * @param t The base term - * @param m The map from schematic function symbols to "terms with holes". A such term is a pair containing a list of - * variable symbols (holes) and a term that is the body of the functional term. + * @param m The map from schematic function symbols to lambda expressions Term(s) -> Term [[LambdaTermTerm]]. 
* @return t[m] */ def instantiateTermSchemas(t: Term, m: Map[SchematicTermLabel, LambdaTermTerm]): Term = { require(m.forall { case (symbol, LambdaTermTerm(arguments, body)) => arguments.length == symbol.arity }) t match { - case VariableTerm(label) => m.get(label).map(_.apply(Nil)).getOrElse(t) - case FunctionTerm(label, args) => + case Term(label: VariableLabel, _) => m.get(label).map(_.apply(Nil)).getOrElse(t) + case Term(label, args) => val newArgs = args.map(instantiateTermSchemas(_, m)) label match { - case label: ConstantFunctionLabel => FunctionTerm(label, newArgs) - case label: SchematicFunctionLabel => - m.get(label).map(_(newArgs)).getOrElse(FunctionTerm(label, newArgs)) + case label: ConstantFunctionLabel => Term(label, newArgs) + case label: SchematicTermLabel => + m.get(label).map(_(newArgs)).getOrElse(Term(label, newArgs)) } } } @@ -77,13 +81,13 @@ trait Substitutions extends FormulaDefinitions { ///////////////////////////// /** - * Performs simultaneous substitution of multiple variables by multiple terms in a formula f. + * Performs simultaneous substitution of multiple variables by multiple terms in a formula. * - * @param f The base formula - * @param m A map from variables to terms. + * @param phi The base formula + * @param m A map from variables to terms * @return t[m] */ - def substituteVariables(f: Formula, m: Map[VariableLabel, Term]): Formula = f match { + def substituteVariables(phi: Formula, m: Map[VariableLabel, Term]): Formula = phi match { case PredicateFormula(label, args) => PredicateFormula(label, args.map(substituteVariables(_, m))) case ConnectorFormula(label, args) => ConnectorFormula(label, args.map(substituteVariables(_, m))) case BinderFormula(label, bound, inner) => @@ -97,13 +101,32 @@ trait Substitutions extends FormulaDefinitions { } /** - * Performs simultaneous substitution of schematic function symbol by "functional" terms, or terms with holes. 
- * If the arity of one of the function symbol to substitute doesn't match the corresponding number of arguments, it will produce an error. + * Performs simultaneous substitution of multiple formula variables by multiple formulas in a formula. + * * @param phi The base formula - * @param m The map from schematic function symbols to "terms with holes". A such term is a pair containing a list of - * variable symbols (holes) and a term that is the body of the functional term. + * @param m A map from formula variables to formulas * @return t[m] */ + def substituteFormulaVariables(phi: Formula, m: Map[VariableFormulaLabel, Formula]): Formula = phi match { + case PredicateFormula(label: VariableFormulaLabel, _) => m.getOrElse(label, phi) + case _: PredicateFormula => phi + case ConnectorFormula(label, args) => ConnectorFormula(label, args.map(substituteFormulaVariables(_, m))) + case BinderFormula(label, bound, inner) => + val fv = m.values.flatMap(_.freeVariables).toSet + if (fv.contains(bound)) { + val newBoundVariable = VariableLabel(freshId(fv.map(_.name), bound.name)) + val newInner = substituteVariables(inner, Map(bound -> VariableTerm(newBoundVariable))) + BinderFormula(label, newBoundVariable, substituteFormulaVariables(newInner, m)) + } else BinderFormula(label, bound, substituteFormulaVariables(inner, m)) + } + + /** + * Performs simultaneous substitution of schematic function symbols by "functional" terms, or terms with holes. + * If the arity of one of the function symbols to substitute doesn't match the corresponding number of arguments, it will produce an error. + * @param phi The base formula + * @param m The map from schematic function symbols to lambda expressions Term(s) -> Term [[LambdaTermTerm]].
+ * @return phi[m] + */ def instantiateTermSchemas(phi: Formula, m: Map[SchematicTermLabel, LambdaTermTerm]): Formula = { require(m.forall { case (symbol, LambdaTermTerm(arguments, body)) => arguments.length == symbol.arity }) phi match { @@ -122,19 +145,18 @@ trait Substitutions extends FormulaDefinitions { /** * Instantiate a schematic predicate symbol in a formula, using higher-order instantiation. - * + * If the arity of one of the predicate symbols to substitute doesn't match the corresponding number of arguments, it will produce an error. * @param phi The base formula - * @param m The map from schematic function symbols to "terms with holes". A such term is a pair containing a list of - * variable symbols (holes) and a term that is the body of the functional term. - * @return t[m] + * @param m The map from schematic predicate symbols to lambda expressions Term(s) -> Formula [[LambdaTermFormula]]. + * @return phi[m] */ - def instantiatePredicateSchemas(phi: Formula, m: Map[SchematicPredicateLabel, LambdaTermFormula]): Formula = { + def instantiatePredicateSchemas(phi: Formula, m: Map[SchematicVarOrPredLabel, LambdaTermFormula]): Formula = { require(m.forall { case (symbol, LambdaTermFormula(arguments, body)) => arguments.length == symbol.arity }) phi match { case PredicateFormula(label, args) => label match { case label: SchematicPredicateLabel if m.contains(label) => m(label)(args) - case label => phi + case _ => phi } case ConnectorFormula(label, args) => ConnectorFormula(label, args.map(instantiatePredicateSchemas(_, m))) case BinderFormula(label, bound, inner) => @@ -147,6 +169,33 @@ trait Substitutions extends FormulaDefinitions { } } + /** + * Instantiate a schematic connector symbol in a formula, using higher-order instantiation. + * + * @param phi The base formula + * @param m The map from schematic connector symbols to lambda expressions Formula(s) -> Formula [[LambdaFormulaFormula]].
+ * @return phi[m] + */ + def instantiateConnectorSchemas(phi: Formula, m: Map[SchematicConnectorLabel, LambdaFormulaFormula]): Formula = { + require(m.forall { case (symbol, LambdaFormulaFormula(arguments, body)) => arguments.length == symbol.arity }) + phi match { + case _: PredicateFormula => phi + case ConnectorFormula(label, args) => + label match { + case label: SchematicConnectorLabel if m.contains(label) => m(label)(args) + case _ => ConnectorFormula(label, args.map(instantiateConnectorSchemas(_, m))) + } + case BinderFormula(label, bound, inner) => + val fv: Set[VariableLabel] = (m.flatMap { case (symbol, LambdaFormulaFormula(arguments, body)) => body.freeVariables }).toSet + if (fv.contains(bound)) { + val newBoundVariable = VariableLabel(freshId(fv.map(_.name), bound.name)) + val newInner = substituteVariables(inner, Map(bound -> VariableTerm(newBoundVariable))) + BinderFormula(label, newBoundVariable, instantiateConnectorSchemas(newInner, m)) + } else BinderFormula(label, bound, instantiateConnectorSchemas(inner, m)) + } + } + + @deprecated def instantiateBinder(f: BinderFormula, t: Term): Formula = substituteVariables(f.inner, Map(f.bound -> t)) } diff --git a/lisa-kernel/src/main/scala/lisa/kernel/fol/TermDefinitions.scala b/lisa-kernel/src/main/scala/lisa/kernel/fol/TermDefinitions.scala index 23359f1017e90ddbc870968ed026005ba27cd74a..58d7ad100d9366f34dd9391e59f6b24ecaf9e8fb 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/fol/TermDefinitions.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/fol/TermDefinitions.scala @@ -7,63 +7,70 @@ private[fol] trait TermDefinitions extends TermLabelDefinitions { protected trait TreeWithLabel[A] { val label: A + val arity: Int /** - * @return The list of free variables in the term + * @return The list of free variables in the tree. */ def freeVariables: Set[VariableLabel] /** * @return The list of constant (i.e. non schematic) function symbols, including of arity 0. 
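The new `instantiateConnectorSchemas` performs higher-order instantiation: a schematic connector applied to arguments is replaced by the body of a formula-to-formula lambda, applied to the (recursively instantiated) arguments. A minimal standalone sketch under assumed toy types (`SchemConn`, `LamFF`, etc. are hypothetical names, not the kernel's classes):

```scala
// Toy model of higher-order connector instantiation.
sealed trait F
case class Atom(name: String) extends F
case class SchemConn(id: String, args: Seq[F]) extends F // schematic connector ?c(...)
case class AndF(l: F, r: F) extends F

// A "formula with formula holes": holes are positional.
case class LamFF(arity: Int, body: Seq[F] => F)

def instantiate(phi: F, m: Map[String, LamFF]): F = phi match {
  case a: Atom => a
  case SchemConn(id, args) =>
    val newArgs = args.map(instantiate(_, m))
    m.get(id).filter(_.arity == newArgs.length)
      .map(_.body(newArgs))
      .getOrElse(SchemConn(id, newArgs))
  case AndF(l, r) => AndF(instantiate(l, m), instantiate(r, m))
}

// ?c(P) with ?c := (x => x ∧ Q) yields P ∧ Q
val res = instantiate(SchemConn("?c", Seq(Atom("P"))),
  Map("?c" -> LamFF(1, { case Seq(x) => AndF(x, Atom("Q")) })))
```

The kernel version additionally renames bound variables when a binder would capture a free variable of a substituted body, exactly as in the other substitution functions.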
*/ - def constantFunctions: Set[ConstantFunctionLabel] + def constantTermLabels: Set[ConstantFunctionLabel] /** - * @return The list of schematic function symbols (including free variables) in the term + * @return The list of schematic term symbols (including free and bound variables) in the tree. */ - def schematicTerms: Set[SchematicTermLabel] - } + def schematicTermLabels: Set[SchematicTermLabel] - /** - * The parent classes of terms. - * A term is a tree with nodes labeled by functions labels or variables. - * The number of children of a node is restricted by the arity imposed by the label. - */ - sealed abstract class Term extends TreeWithLabel[TermLabel] - - /** - * A term which consists of a single variable. - * - * @param label The label of the variable. - */ - sealed case class VariableTerm(label: VariableLabel) extends Term { - override def freeVariables: Set[VariableLabel] = Set(label) - - override def constantFunctions: Set[ConstantFunctionLabel] = Set.empty - override def schematicTerms: Set[SchematicTermLabel] = Set(label) + /** + * @return The list of schematic term symbols (excluding bound variables) in the tree. + */ + def freeSchematicTermLabels: Set[SchematicTermLabel] } /** - * A term labelled by a function symbol. It must contain a number of children equal to the arity of the symbol - * + * A term labelled by a function symbol. It must contain a number of children equal to the arity of the symbol. + * The label can be a constant or schematic term label of any arity, including a variable label. * @param label The label of the node * @param args children of the node. The number of argument must be equal to the arity of the function. 
*/ - sealed case class FunctionTerm(label: FunctionLabel, args: Seq[Term]) extends Term { + sealed case class Term(label: TermLabel, args: Seq[Term]) extends TreeWithLabel[TermLabel] { require(label.arity == args.size) + val arity: Int = label.arity - override def freeVariables: Set[VariableLabel] = args.foldLeft(Set.empty[VariableLabel])((prev, next) => prev union next.freeVariables) + override def freeVariables: Set[VariableLabel] = label match { + case l: VariableLabel => Set(l) + case _ => args.foldLeft(Set.empty[VariableLabel])((prev, next) => prev union next.freeVariables) + } - override def constantFunctions: Set[ConstantFunctionLabel] = label match { - case l: ConstantFunctionLabel => args.foldLeft(Set.empty[ConstantFunctionLabel])((prev, next) => prev union next.constantFunctions) + l - case l: SchematicFunctionLabel => args.foldLeft(Set.empty[ConstantFunctionLabel])((prev, next) => prev union next.constantFunctions) + override def constantTermLabels: Set[ConstantFunctionLabel] = label match { + case l: ConstantFunctionLabel => args.foldLeft(Set.empty[ConstantFunctionLabel])((prev, next) => prev union next.constantTermLabels) + l + case l: SchematicTermLabel => args.foldLeft(Set.empty[ConstantFunctionLabel])((prev, next) => prev union next.constantTermLabels) } - override def schematicTerms: Set[SchematicTermLabel] = label match { - case l: ConstantFunctionLabel => args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.schematicTerms) - case l: SchematicFunctionLabel => args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.schematicTerms) + l + override def schematicTermLabels: Set[SchematicTermLabel] = label match { + case l: ConstantFunctionLabel => args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.schematicTermLabels) + case l: SchematicTermLabel => args.foldLeft(Set.empty[SchematicTermLabel])((prev, next) => prev union next.schematicTermLabels) + l } + override def 
freeSchematicTermLabels: Set[SchematicTermLabel] = schematicTermLabels + } + + /** + * A VariableTerm is exactly an arity-0 term whose label is a variable label, but we provide specific constructors and destructors. + */ + object VariableTerm extends (VariableLabel => Term) { - val arity: Int = args.size + /** + * A term which consists of a single variable. + * + * @param label The label of the variable. + */ + def apply(label: VariableLabel): Term = Term(label, Seq()) + def unapply(t: Term): Option[VariableLabel] = t.label match { + case l: VariableLabel => Some(l) + case _ => None + } } } diff --git a/lisa-kernel/src/main/scala/lisa/kernel/fol/TermLabelDefinitions.scala b/lisa-kernel/src/main/scala/lisa/kernel/fol/TermLabelDefinitions.scala index 3c2ca18d6023d055ff82e040edd23b27c3ad4bee..cb8b2aeb95a9e9a678db0b1e972541d697843dbc 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/fol/TermLabelDefinitions.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/fol/TermLabelDefinitions.scala @@ -9,58 +9,38 @@ private[fol] trait TermLabelDefinitions extends CommonDefinitions { * The parent class of term labels. * These are labels that can be applied to nodes that form the tree of a term. * For example, Powerset is not a term itself, it's a label for a node with a single child in a tree corresponding to a term. - * In logical terms, those labels are essentially symbols of sme language. + * In logical terms, those labels are essentially symbols of some language. */ - sealed abstract class TermLabel extends Label with Ordered[TermLabel] { - def priority: Int = this match { - case _: VariableLabel => 1 - case _: ConstantFunctionLabel => 2 - case _: SchematicFunctionLabel => 3 - } - - /** - * Sorts labels according to first whether the term is a variable or function, then according to arity, then to the id. 
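The `VariableTerm` object above relies on Scala's `apply`/`unapply` convention, so that an arity-0 variable-labelled `Term` can still be constructed and pattern-matched as if it were a dedicated case class even after the class hierarchy was unified. A toy sketch of the pattern (the `VarLabel`/`Node` types are invented for illustration, not the kernel's):

```scala
// Mini sketch of the extractor-object pattern used by VariableTerm.
case class VarLabel(name: String)
case class Node(label: Any, args: Seq[Node])

object VarNode extends (VarLabel => Node) {
  // Constructed like a case class: VarNode(l) builds an arity-0 node.
  def apply(l: VarLabel): Node = Node(l, Seq())
  // Destructured like a case class: case VarNode(l) => ...
  def unapply(t: Node): Option[VarLabel] = t.label match {
    case l: VarLabel if t.args.isEmpty => Some(l)
    case _ => None
  }
}

val x = VarNode(VarLabel("x"))
val name = x match {
  case VarNode(l) => l.name
  case _ => "?"
}
```

Extending `VarLabel => Node` also keeps eta-expanded uses such as `vars.map(VarNode)` working, which is why the diff only needs `VariableTerm.apply` at a few call sites.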
- */ - def compare(that: TermLabel): Int = (this, that) match { - case (thi: VariableLabel, tha: VariableLabel) => thi.id compare tha.id - case (thi: ConstantFunctionLabel, tha: ConstantFunctionLabel) => (2 * (thi.arity compare tha.arity) + (thi.id compare tha.id)).sign - case (thi: SchematicFunctionLabel, tha: SchematicFunctionLabel) => (2 * (thi.arity compare tha.arity) + (thi.id compare tha.id)).sign - case _ => this.priority - that.priority - } - } - - /** - * The label of a function-like term. Constants are functions of arity 0. - * There are two kinds of function symbols: Standards and schematic. - * Standard function symbols denote a particular function. Schematic function symbols - * can be instantiated with any term. This is particularly useful to express axiom schemas. - */ - sealed abstract class FunctionLabel extends TermLabel { + sealed abstract class TermLabel extends Label { require(arity >= 0 && arity < MaxArity) } /** - * A function symbol. + * A fixed function symbol. If arity is 0, it is just a regular constant symbol. * * @param id The name of the function symbol. * @param arity The arity of the function symbol. A function symbol of arity 0 is a constant */ - sealed case class ConstantFunctionLabel(id: String, arity: Int) extends FunctionLabel with ConstantLabel + sealed case class ConstantFunctionLabel(id: String, arity: Int) extends TermLabel with ConstantLabel - sealed trait SchematicTermLabel extends TermLabel {} + /** + * A schematic symbol which is uninterpreted and can be substituted by a functional term of the same arity. + * We distinguish arity-0 schematic term labels, which we call variables and which can be bound, from arity >= 1 schematic symbols. + */ + sealed trait SchematicTermLabel extends TermLabel with SchematicLabel {} /** * A schematic function symbol that can be substituted. * * @param id The name of the function symbol. - * @param arity The arity of the function symbol.
A function symbol of arity 0 is a constant + * @param arity The arity of the function symbol. Must be at least 1. */ - sealed case class SchematicFunctionLabel(id: String, arity: Int) extends FunctionLabel with SchematicTermLabel { + sealed case class SchematicFunctionLabel(id: String, arity: Int) extends SchematicTermLabel { require(arity >= 1 && arity < MaxArity) } /** - * The label of a term which is a variable. + * The label of a term which is a variable. Can be bound in a formula, or substituted for an arbitrary term. * * @param id The name of the variable, for example "x" or "y". */ @@ -70,7 +50,7 @@ private[fol] trait TermLabelDefinitions extends CommonDefinitions { } /** - * A function returning true if and only if the two symbols are considered "the same". + * A function returning true if and only if the two symbols are considered "the same", i.e. same category, same arity and same id. */ - def isSame(l: TermLabel, r: TermLabel): Boolean = (l compare r) == 0 + def isSame(l: TermLabel, r: TermLabel): Boolean = l == r } diff --git a/lisa-kernel/src/main/scala/lisa/kernel/proof/RunningTheory.scala b/lisa-kernel/src/main/scala/lisa/kernel/proof/RunningTheory.scala index 5d17880e22ae33e408c0f5d10612bd119b90638b..b6c4ec291900a8924ad62d5d470c849001a5606c 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/proof/RunningTheory.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/proof/RunningTheory.scala @@ -68,17 +68,6 @@ class RunningTheory { private[proof] val knownSymbols: mMap[String, ConstantLabel] = mMap(equality.id -> equality) - /** - * Check if a label is a symbol of the theory - */ - - def isSymbol(label: ConstantLabel): Boolean = label match { - case c: ConstantFunctionLabel => funDefinitions.contains(c) - case c: ConstantPredicateLabel => predDefinitions.contains(c) - } - - def isAvailable(label: ConstantLabel): Boolean = !knownSymbols.contains(label.id) - /** * From a given proof, if it is true in the Running theory, add that theorem to the theory
and returns it. * The proof's imports must be justified by the list of justification, and the conclusion of the theorem @@ -113,15 +102,14 @@ class RunningTheory { * and the formula can't contain symbols that are not in the theory. * * @param label The desired label. - * @param args The variables representing the arguments of the predicate in the formula phi. - * @param phi The formula defining the predicate. + * @param expression The functional formula defining the predicate. * @return A definition object if the parameters are correct, */ def makePredicateDefinition(label: ConstantPredicateLabel, expression: LambdaTermFormula): RunningTheoryJudgement[this.PredicateDefinition] = { val LambdaTermFormula(vars, body) = expression if (belongsToTheory(body)) if (isAvailable(label)) - if (body.schematicTerms.subsetOf(vars.toSet) && body.schematicPredicates.isEmpty) { + if (body.freeSchematicTermLabels.subsetOf(vars.toSet) && body.schematicPredicateLabels.isEmpty) { val newDef = PredicateDefinition(label, expression) predDefinitions.update(label, Some(newDef)) knownSymbols.update(label.id, label) @@ -141,9 +129,8 @@ class RunningTheory { * @param proof The proof of existence and uniqueness * @param justifications The justifications of the proof. * @param label The desired label. - * @param args The variables representing the arguments of the predicate in the formula phi. + * @param expression The functional term defining the function symbol. * @param out The variable representing the function's result in the formula - * @param phi The formula defining the predicate. 
* @return A definition object if the parameters are correct, */ def makeFunctionDefinition( @@ -156,7 +143,7 @@ val LambdaTermFormula(vars, body) = expression if (belongsToTheory(body)) if (isAvailable(label)) { - if (body.schematicTerms.subsetOf((vars appended out).toSet) && body.schematicPredicates.isEmpty) + if (body.freeSchematicTermLabels.subsetOf((vars appended out).toSet) && body.schematicPredicateLabels.isEmpty) { if (proof.imports.forall(i => justifications.exists(j => isSameSequent(i, sequentFromJustification(j))))) { val r = SCProofChecker.checkSCProof(proof) r match { @@ -175,7 +162,9 @@ case r @ SCProofCheckerJudgement.SCInvalidProof(_, path, message) => InvalidJustification("The given proof is incorrect: " + message, Some(r)) } } else InvalidJustification("Not all imports of the proof are correctly justified.", None) - else InvalidJustification("The definition is not allowed to contain schematic symbols or free variables.", None) + } else { + InvalidJustification("The definition is not allowed to contain schematic symbols or free variables.", None) + } } else InvalidJustification("The specified symbol id is already part of the theory and can't be redefined.", None) else InvalidJustification("All symbols in the conclusion of the proof must belong to the theory.
You need to add missing symbols to the theory.", None) } @@ -184,7 +177,7 @@ class RunningTheory { case Theorem(name, proposition) => proposition case Axiom(name, ax) => Sequent(Set.empty, Set(ax)) case PredicateDefinition(label, LambdaTermFormula(vars, body)) => - val inner = ConnectorFormula(Iff, Seq(PredicateFormula(label, vars.map(VariableTerm)), body)) + val inner = ConnectorFormula(Iff, Seq(PredicateFormula(label, vars.map(VariableTerm.apply)), body)) Sequent(Set(), Set(inner)) case FunctionDefinition(label, out, LambdaTermFormula(vars, body)) => val inner = BinderFormula( @@ -193,7 +186,7 @@ ConnectorFormula( Iff, Seq( - PredicateFormula(equality, Seq(FunctionTerm(label, vars.map(VariableTerm)), VariableTerm(out))), + PredicateFormula(equality, Seq(Term(label, vars.map(VariableTerm.apply)), VariableTerm(out))), body ) ) @@ -202,6 +195,54 @@ } + /** + * Add a new axiom to the Theory. For example, if the theory contains the language and theorems + * of Zermelo-Fraenkel Set Theory, this function may add the axiom of choice to it. + * If the axiom belongs to the language of the theory, adds it and returns true. Else, returns false. + * + * @param f the new axiom to be added. + * @return true if the axiom was added to the theory, false otherwise. + */ + def addAxiom(name: String, f: Formula): Boolean = { + if (belongsToTheory(f)) { + theoryAxioms.update(name, Axiom(name, f)) + true + } else false + } + + /** + * Add a new symbol to the theory, without providing a definition. An ad-hoc definition can be + * added via an axiom, typically if the desired object is not derivable in the base theory itself. + * For example, this function can add the empty set symbol to a theory, and then an axiom asserting + * that it is empty can be introduced as well.
+ */ + + def addSymbol(s: ConstantLabel): Unit = { + if (isAvailable(s)) { + knownSymbols.update(s.id, s) + s match { + case c: ConstantFunctionLabel => funDefinitions.update(c, None) + case c: ConstantPredicateLabel => predDefinitions.update(c, None) + } + } else {} + } + + /** + * Add all constant symbols in the formula. Note that this can't be reversed and will prevent from giving them a definition later. + */ + def makeFormulaBelongToTheory(phi: Formula): Unit = { + phi.constantPredicateLabels.foreach(addSymbol) + phi.constantTermLabels.foreach(addSymbol) + } + + /** + * Add all constant symbols in the sequent. Note that this can't be reversed and will prevent from giving them a definition later. + */ + def makeSequentBelongToTheory(s: Sequent): Unit = { + s.left.foreach(makeFormulaBelongToTheory) + s.right.foreach(makeFormulaBelongToTheory) + } + /** * Verify if a given formula belongs to some language * @@ -212,112 +253,97 @@ class RunningTheory { case PredicateFormula(label, args) => label match { case l: ConstantPredicateLabel => isSymbol(l) && args.forall(belongsToTheory) - case _: SchematicPredicateLabel => args.forall(belongsToTheory) + case _ => args.forall(belongsToTheory) } case ConnectorFormula(label, args) => args.forall(belongsToTheory) case BinderFormula(label, bound, inner) => belongsToTheory(inner) } - def makeFormulaBelongToTheory(phi: Formula): Unit = { - phi.constantPredicates.foreach(addSymbol) - phi.constantFunctions.foreach(addSymbol) - } - /** - * Verify if a given term belongs to some language + * Verify if a given term belongs to the language of the theory. * * @param t The term to check - * @return Weather t belongs to the specified language + * @return Whether t belongs to the specified language.
*/ def belongsToTheory(t: Term): Boolean = t match { - case VariableTerm(label) => true - case FunctionTerm(label, args) => + case Term(label, args) => label match { - case l: ConstantFunctionLabel => isSymbol(l) && args.forall(belongsToTheory(_)) - case _: SchematicFunctionLabel => args.forall(belongsToTheory(_)) + case l: ConstantFunctionLabel => isSymbol(l) && args.forall(belongsToTheory) + case _: SchematicTermLabel => args.forall(belongsToTheory) } } /** - * Verify if a given sequent belongs to some language + * Verify if a given sequent belongs to the language of the theory. * * @param s The sequent to check * @return Weather s belongs to the specified language */ def belongsToTheory(s: Sequent): Boolean = - s.left.forall(belongsToTheory(_)) && s.right.forall(belongsToTheory(_)) - - def makeSequentBelongToTheory(s: Sequent): Unit = { - s.left.foreach(makeFormulaBelongToTheory) - s.right.foreach(makeFormulaBelongToTheory) - } + s.left.forall(belongsToTheory) && s.right.forall(belongsToTheory) /** - * Add a new axiom to the Theory. For example, if the theory contains the language and theorems - * of Zermelo-Fraenkel Set Theory, this function can add the axiom of choice to it. - * If the axiom belongs to the language of the theory, adds it and return true. Else, returns false. + * Public accessor to the set of symbols currently in the theory's language. * - * @param f the new axiom to be added. - * @return true if the axiom was added to the theory, false else. + * @return the set of symbols currently in the theory's language. */ - def addAxiom(name: String, f: Formula): Boolean = { - if (belongsToTheory(f)) { - theoryAxioms.update(name, Axiom(name, f)) - true - } else false - } + def language(): List[(ConstantLabel, Option[Definition])] = funDefinitions.toList ++ predDefinitions.toList /** - * Add a new symbol to the theory, without providing a definition.
An ad-hoc definition can be - added via an axiom, typically if the desired object is not derivable in the base theory itself. - * For example, This function can add the empty set symbol to a theory, and then an axiom asserting - the it is empty can be introduced as well. + * Check if a label is a symbol of the theory. */ - - def addSymbol(s: ConstantLabel): Unit = { - if (isAvailable(s)) { - knownSymbols.update(s.id, s) - s match { - case c: ConstantFunctionLabel => funDefinitions.update(c, None) - case c: ConstantPredicateLabel => predDefinitions.update(c, None) - } - } else {} + def isSymbol(label: ConstantLabel): Boolean = label match { + case c: ConstantFunctionLabel => funDefinitions.contains(c) + case c: ConstantPredicateLabel => predDefinitions.contains(c) } /** - * Public accessor to the set of symbol currently in the theory's language. - * - * @return the set of symbol currently in the theory's language. + * Check if a label is not already used in the theory. + * @return true if the label's id is not yet part of the theory's language. */ - - def language: List[(ConstantLabel, Option[Definition])] = funDefinitions.toList ++ predDefinitions.toList + def isAvailable(label: ConstantLabel): Boolean = !knownSymbols.contains(label.id) /** * Public accessor to the current set of axioms of the theory * * @return the current set of axioms of the theory */ - def axiomsList: Iterable[Axiom] = theoryAxioms.values + def axiomsList(): Iterable[Axiom] = theoryAxioms.values /** * Verify if a given formula is an axiom of the theory - * - * @param f the candidate axiom - * @return wether f is an axiom of the theory */ def isAxiom(f: Formula): Boolean = theoryAxioms.exists(a => isSame(a._2.ax, f)) + /** + * Get the Axiom that is the same as the given formula, if it exists in the theory. + */ def getAxiom(f: Formula): Option[Axiom] = theoryAxioms.find(a => isSame(a._2.ax, f)).map(_._2) + /** + * Get the definition of the given label, if it is defined in the theory.
+ */ def getDefinition(label: ConstantPredicateLabel): Option[PredicateDefinition] = predDefinitions.get(label).flatten + /** + * Get the definition of the given label, if it is defined in the theory. + */ def getDefinition(label: ConstantFunctionLabel): Option[FunctionDefinition] = funDefinitions.get(label).flatten + /** + * Get the Axiom with the given name, if it exists in the theory. + */ def getAxiom(name: String): Option[Axiom] = theoryAxioms.get(name) + /** + * Get the Theorem with the given name, if it exists in the theory. + */ def getTheorem(name: String): Option[Theorem] = theorems.get(name) + /** + * Get the definition for the given identifier, if it is defined in the theory. + */ def getDefinition(name: String): Option[Definition] = knownSymbols.get(name).flatMap { case f: ConstantPredicateLabel => getDefinition(f) case f: ConstantFunctionLabel => getDefinition(f) diff --git a/lisa-kernel/src/main/scala/lisa/kernel/proof/SCProof.scala b/lisa-kernel/src/main/scala/lisa/kernel/proof/SCProof.scala index de5b860c061a596c3a751138713dc2218f60fd96..fb4e5c799777c7ec2e8f4938fbf348bc90f95b58 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/proof/SCProof.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/proof/SCProof.scala @@ -28,7 +28,7 @@ case class SCProof(steps: IndexedSeq[SCProofStep], imports: IndexedSeq[Sequent] /** * Get the ith sequent of the proof. If the index is positive, give the bottom sequent of proof step number i. - * If the index is positive, return the (-i-1)th imported sequent. + * If the index is negative, return the <code>(-i-1)</code>th imported sequent. * * @param i The reference number of a sequent in the proof * @return A sequent, either imported or reached during the proof. 
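The corrected doc comment above describes `SCProof.getSequent`'s two-sided indexing: a non-negative index addresses a proof step, a negative index addresses an import. A minimal self-contained sketch of that convention (`MiniProof` is a hypothetical stand-in, not LISA's actual `SCProof` class):

```scala
// Hypothetical miniature of SCProof.getSequent's indexing convention:
// a non-negative index i yields the bottom sequent of proof step number i,
// while a negative index i yields imported sequent number (-i - 1).
final case class MiniProof(steps: IndexedSeq[String], imports: IndexedSeq[String]) {
  def getSequent(i: Int): String =
    if (i >= 0) steps(i) // bottom sequent of step number i
    else imports(-i - 1) // the (-i - 1)-th imported sequent
}

object MiniProofDemo extends App {
  val p = MiniProof(IndexedSeq("step0", "step1"), IndexedSeq("import0", "import1"))
  assert(p.getSequent(1) == "step1") // positive index: a proof step
  assert(p.getSequent(-1) == "import0") // -1 maps to import 0
  assert(p.getSequent(-2) == "import1") // -2 maps to import 1
}
```

Proof steps thus refer to earlier steps and to imports through a single integer namespace, which is why `SCProofChecker` only needs one `references: Int => Sequent` function.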
diff --git a/lisa-kernel/src/main/scala/lisa/kernel/proof/SCProofChecker.scala b/lisa-kernel/src/main/scala/lisa/kernel/proof/SCProofChecker.scala index b11218c3c4952f4a8c5ad968090ca6bff99d6c60..822bd7fa92c501e34ec20ebee400d11ba49d9927 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/proof/SCProofChecker.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/proof/SCProofChecker.scala @@ -6,19 +6,15 @@ import lisa.kernel.proof.SequentCalculus._ object SCProofChecker { - private object Set { - def unapplySeq[T](s: Set[T]): Option[Seq[T]] = Some(s.toSeq) - } - /** - * This function verifies that a single SCProofStep is correctly applied. It verify that the step only refers to sequents with a lower number, and that - * the type and parameters of the proofstep correspond to the sequent claimed sequent. + * This function verifies that a single SCProofStep is correctly applied. It verifies that the step only refers to sequents with a lower number, + * and that the type, premises and parameters of the proof step correspond to the claimed conclusion. * * @param no The number of the given proof step. Needed to vewrify that the proof step doesn't refer to posterior sequents. * @param step The proof step whose correctness needs to be checked * @param references A function that associates sequents to a range of positive and negative integers that the proof step may refer to. Typically, * a proof's [[SCProof.getSequent]] function. - * @return + * @return A Judgement about the correctness of the proof step. 
*/ def checkSingleSCStep(no: Int, step: SCProofStep, references: Int => Sequent, importsSize: Option[Int] = None): SCProofCheckerJudgement = { val ref = references @@ -39,6 +35,15 @@ object SCProofChecker { */ case Rewrite(s, t1) => if (isSameSequent(s, ref(t1))) SCValidProof(SCProof(step)) else SCInvalidProof(SCProof(step), Nil, s"The premise and the conclusion are not trivially equivalent.") + + /* + * + * ------------ + * Γ |- Γ + */ + case RewriteTrue(s) => + val truth = Sequent(Set(), Set(PredicateFormula(top, Nil))) + if (isSameSequent(s, truth)) SCValidProof(SCProof(step)) else SCInvalidProof(SCProof(step), Nil, s"The desired conclusion is not a trivial tautology") /* * * -------------- @@ -480,7 +485,7 @@ object SCProofChecker { /** * Verifies if a given pure SequentCalculus is conditionally correct, as the imported sequents are assumed. - * If the proof is not correct, the functrion will report the faulty line and a brief explanation. + * If the proof is not correct, the function will report the faulty line and a brief explanation. 
* * @param proof A SC proof to check * @return SCValidProof(SCProof(step)) if the proof is correct, else SCInvalidProof with the path to the incorrect proof step diff --git a/lisa-kernel/src/main/scala/lisa/kernel/proof/SequentCalculus.scala b/lisa-kernel/src/main/scala/lisa/kernel/proof/SequentCalculus.scala index 75c34935b10dd1b2c741f74cadd43d4111ac2839..16fa26efe3019d0158ddf4a2b9dc5e371cb2c247 100644 --- a/lisa-kernel/src/main/scala/lisa/kernel/proof/SequentCalculus.scala +++ b/lisa-kernel/src/main/scala/lisa/kernel/proof/SequentCalculus.scala @@ -54,11 +54,20 @@ object SequentCalculus { * <pre> * Γ |- Δ * ------------ - * Γ |- Δ + * Γ |- Δ (OCBSL rewrite) * </pre> */ case class Rewrite(bot: Sequent, t1: Int) extends SCProofStep { val premises = Seq(t1) } + /** + * <pre> + * + * ------------ + * Γ |- Γ (OCBSL tautology) + * </pre> + */ + case class RewriteTrue(bot: Sequent) extends SCProofStep { val premises = Seq() } + /** * <pre> * @@ -227,7 +236,7 @@ object SequentCalculus { */ case class RightExistsOne(bot: Sequent, t1: Int, phi: Formula, x: VariableLabel) extends SCProofStep { val premises = Seq(t1) } - // Structural rules + // Structural rule /** * <pre> * Γ |- Δ @@ -292,6 +301,7 @@ object SequentCalculus { */ case class RightSubstIff(bot: Sequent, t1: Int, equals: List[(Formula, Formula)], lambdaPhi: LambdaFormulaFormula) extends SCProofStep { val premises = Seq(t1) } + // Rules for schemas /** * <pre> * Γ |- Δ @@ -308,9 +318,17 @@ object SequentCalculus { * Γ[ψ(a)/?p] |- Δ[ψ(a)/?p] * </pre> */ - case class InstPredSchema(bot: Sequent, t1: Int, insts: Map[SchematicPredicateLabel, LambdaTermFormula]) extends SCProofStep { val premises = Seq(t1) } + case class InstPredSchema(bot: Sequent, t1: Int, insts: Map[SchematicVarOrPredLabel, LambdaTermFormula]) extends SCProofStep { val premises = Seq(t1) } // Proof Organisation rules + + /** + * Encapsulate a proof into a single step. The imports of the subproof correspond to the premises of the step.
+ * @param sp The encapsulated subproof. + * @param premises The indices of steps on the outside proof that are equivalent to the imports of the subproof. + * @param display A boolean value indicating whether the subproof needs to be expanded when printed. Should eventually be removed + * and replaced by encapsulation. + */ case class SCSubproof(sp: SCProof, premises: Seq[Int] = Seq.empty, display: Boolean = true) extends SCProofStep { // premises is a list of ints similar to t1, t2... that verifies that imports of the subproof sp are justified by previous steps. val bot: Sequent = sp.conclusion diff --git a/src/main/scala/lisa/proven/Main.scala b/lisa-theories/src/main/scala/lisa/Main.scala similarity index 78% rename from src/main/scala/lisa/proven/Main.scala rename to lisa-theories/src/main/scala/lisa/Main.scala index bf01e93e0444e24ae0943b88255ee9607788c585..f4e9dcfdcf3a2ba80cd4239fce6f4f2002bda776 100644 --- a/src/main/scala/lisa/proven/Main.scala +++ b/lisa-theories/src/main/scala/lisa/Main.scala @@ -1,15 +1,17 @@ -package lisa.proven +package lisa /** * The parent trait of all theory files containing mathematical development */ trait Main { - export lisa.proven.SetTheoryLibrary.{*, given} + export lisa.settheory.SetTheoryLibrary.{_, given} private val realOutput: String => Unit = println private var outString: List[String] = List() private val lineBreak = "\n" + given output: (String => Unit) = s => outString = lineBreak :: s :: outString + given finishOutput: (Throwable => Nothing) = e => { main(Array[String]()) throw e @@ -18,6 +20,8 @@ trait Main { /** * This specific implementation make sure that what is "shown" in theory files is only printed for the one we run, and not for the whole library.
*/ - def main(args: Array[String]): Unit = { realOutput(outString.reverse.mkString("")) } + def main(args: Array[String]): Unit = { + realOutput(outString.reverse.mkString("")) + } } diff --git a/lisa-theories/src/main/scala/lisa/settheory/SetTheoryDefinitions.scala b/lisa-theories/src/main/scala/lisa/settheory/SetTheoryDefinitions.scala index 20263c2a4fed1a8754b51c305eb13aa0d05a63ef..4a9c48923083b6653116b105548b9aa321206cc0 100644 --- a/lisa-theories/src/main/scala/lisa/settheory/SetTheoryDefinitions.scala +++ b/lisa-theories/src/main/scala/lisa/settheory/SetTheoryDefinitions.scala @@ -13,22 +13,60 @@ private[settheory] trait SetTheoryDefinitions { def axioms: Set[(String, Formula)] = Set.empty // Predicates + /** + * The symbol for the set membership predicate. + */ final val in = ConstantPredicateLabel("set_membership", 2) + + /** + * The symbol for the subset predicate. + */ final val subset = ConstantPredicateLabel("subset_of", 2) + + /** + * The symbol for the equicardinality predicate. Needed for Tarski's axiom. + */ final val sim = ConstantPredicateLabel("same_cardinality", 2) // Equicardinality + /** + * Set Theory basic predicates + */ final val predicates = Set(in, subset, sim) - // val application - // val pick + // val choice // Functions + /** + * The symbol for the empty set constant. + */ final val emptySet = ConstantFunctionLabel("empty_set", 0) + + /** + * The symbol for the unordered pair function. + */ final val pair = ConstantFunctionLabel("unordered_pair", 2) - final val singleton = ConstantFunctionLabel("singleton", 1) + + /** + * The symbol for the powerset function. + */ final val powerSet = ConstantFunctionLabel("power_set", 1) + + /** + * The symbol for the set union function. + */ final val union = ConstantFunctionLabel("union", 1) + + /** + * The symbol for the universe function. Defined in TG set theory. 
+ */ final val universe = ConstantFunctionLabel("universe", 1) - final val functions = Set(emptySet, pair, singleton, powerSet, union, universe) + /** + * Set Theory basic functions. + */ + final val functions = Set(emptySet, pair, powerSet, union, universe) + + /** + * The kernel theory loaded with Set Theory symbols and axioms. + */ val runningSetTheory: RunningTheory = new RunningTheory() // given RunningTheory = runningSetTheory diff --git a/src/main/scala/lisa/proven/SetTheoryLibrary.scala b/lisa-theories/src/main/scala/lisa/settheory/SetTheoryLibrary.scala similarity index 66% rename from src/main/scala/lisa/proven/SetTheoryLibrary.scala rename to lisa-theories/src/main/scala/lisa/settheory/SetTheoryLibrary.scala index 8045393e04a6df2fef2908d137d31c81a01f8868..b127bbeaa2fdfe59dacfdefb1b89cfc947e90dbb 100644 --- a/src/main/scala/lisa/proven/SetTheoryLibrary.scala +++ b/lisa-theories/src/main/scala/lisa/settheory/SetTheoryLibrary.scala @@ -1,9 +1,11 @@ -package lisa.proven +package lisa.settheory + +import lisa.utils.Library /** * Specific implementation of [[utilities.Library]] for Set Theory, with a RunningTheory that is supposed to be used by the standard library. 
*/ -object SetTheoryLibrary extends lisa.utils.Library(lisa.settheory.AxiomaticSetTheory.runningSetTheory) { +object SetTheoryLibrary extends Library(lisa.settheory.AxiomaticSetTheory.runningSetTheory) { val AxiomaticSetTheory: lisa.settheory.AxiomaticSetTheory.type = lisa.settheory.AxiomaticSetTheory export AxiomaticSetTheory.* diff --git a/lisa-theories/src/main/scala/lisa/settheory/SetTheoryZAxioms.scala b/lisa-theories/src/main/scala/lisa/settheory/SetTheoryZAxioms.scala index 824498f5292d836a3528bca2e188fae9a64c81f5..b60d2f556e578474cb5d11884c1ba1b5a1c6c174 100644 --- a/lisa-theories/src/main/scala/lisa/settheory/SetTheoryZAxioms.scala +++ b/lisa-theories/src/main/scala/lisa/settheory/SetTheoryZAxioms.scala @@ -11,13 +11,13 @@ private[settheory] trait SetTheoryZAxioms extends SetTheoryDefinitions { private val (x, y, z) = (VariableLabel("x"), VariableLabel("y"), VariableLabel("z")) - private final val sPhi = SchematicNPredicateLabel("P", 2) + private final val sPhi = SchematicPredicateLabel("P", 2) final val emptySetAxiom: Formula = forall(x, !in(x, emptySet())) final val extensionalityAxiom: Formula = forall(x, forall(y, forall(z, in(z, x) <=> in(z, y)) <=> (x === y))) + final val subsetAxiom: Formula = forall(x, forall(y, subset(x, y) <=> forall(z, in(z, x) ==> in(z, y)))) final val pairAxiom: Formula = forall(x, forall(y, forall(z, in(z, pair(x, y)) <=> (x === z) \/ (y === z)))) final val unionAxiom: Formula = forall(x, forall(z, in(x, union(z)) <=> exists(y, in(x, y) /\ in(y, z)))) - final val subsetAxiom: Formula = forall(x, forall(y, subset(x, y) <=> forall(z, (in(z, x) ==> in(z, y))))) final val powerAxiom: Formula = forall(x, forall(y, in(x, powerSet(y)) <=> subset(x, y))) final val foundationAxiom: Formula = forall(x, !(x === emptySet()) ==> exists(y, in(y, x) /\ forall(z, in(z, x) ==> !in(z, y)))) diff --git a/lisa-theories/src/main/scala/lisa/settheory/SetTheoryZFAxioms.scala b/lisa-theories/src/main/scala/lisa/settheory/SetTheoryZFAxioms.scala 
index e075fabbb9f5fe687ef71cfe859c1c8b5691a51c..bd95ad4551a1c40f9cf94799a63cba71e1a93a06 100644 --- a/lisa-theories/src/main/scala/lisa/settheory/SetTheoryZFAxioms.scala +++ b/lisa-theories/src/main/scala/lisa/settheory/SetTheoryZFAxioms.scala @@ -9,7 +9,7 @@ import lisa.utils.Helpers.{_, given} private[settheory] trait SetTheoryZFAxioms extends SetTheoryZAxioms { private val (x, y, a, b) = (VariableLabel("x"), VariableLabel("y"), VariableLabel("A"), VariableLabel("B")) - private final val sPsi = SchematicNPredicateLabel("P", 3) + private final val sPsi = SchematicPredicateLabel("P", 3) final val replacementSchema: Formula = forall( a, diff --git a/lisa-tptp/src/main/scala/lisa/tptp/KernelParser.scala b/lisa-tptp/src/main/scala/lisa/tptp/KernelParser.scala index 66455c198a822c7ee4c740b410656da5e83d49e4..f75368daf8a76de123afb04416d9d843a1fcadf8 100644 --- a/lisa-tptp/src/main/scala/lisa/tptp/KernelParser.scala +++ b/lisa-tptp/src/main/scala/lisa/tptp/KernelParser.scala @@ -74,7 +74,7 @@ object KernelParser { * @return the same term in LISA */ def convertTermToKernel(term: CNF.Term): K.Term = term match { - case CNF.AtomicTerm(f, args) => K.FunctionTerm(K.ConstantFunctionLabel(f, args.size), args map convertTermToKernel) + case CNF.AtomicTerm(f, args) => K.Term(K.ConstantFunctionLabel(f, args.size), args map convertTermToKernel) case CNF.Variable(name) => K.VariableTerm(K.VariableLabel(name)) case CNF.DistinctObject(name) => ??? } @@ -85,7 +85,7 @@ object KernelParser { */ def convertTermToKernel(term: FOF.Term): K.Term = term match { case FOF.AtomicTerm(f, args) => - K.FunctionTerm(K.ConstantFunctionLabel(f, args.size), args map convertTermToKernel) + K.Term(K.ConstantFunctionLabel(f, args.size), args map convertTermToKernel) case FOF.Variable(name) => K.VariableTerm(K.VariableLabel(name)) case FOF.DistinctObject(name) => ??? case FOF.NumberTerm(value) => ??? 
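The `convertTermToKernel` changes above keep one pattern worth noting: the kernel label's arity is derived from the argument list at the point of construction, via `K.ConstantFunctionLabel(f, args.size)`. A minimal sketch of that pattern with hypothetical stand-in types (`In`, `Out`, `MiniLabel` are illustrative, not LISA's or the TPTP parser's actual API):

```scala
// Hypothetical miniature of the TPTP-to-kernel term conversion:
// the label's arity is computed from args.size at construction time,
// mirroring K.ConstantFunctionLabel(f, args.size) in convertTermToKernel.
sealed trait In
final case class InApp(f: String, args: Seq[In]) extends In
final case class InVar(name: String) extends In

final case class MiniLabel(id: String, arity: Int)
sealed trait Out
final case class OutApp(label: MiniLabel, args: Seq[Out]) extends Out
final case class OutVar(name: String) extends Out

def convert(t: In): Out = t match {
  case InApp(f, args) => OutApp(MiniLabel(f, args.size), args.map(convert))
  case InVar(n) => OutVar(n)
}
```

Because arity is part of the label, the same identifier applied to different numbers of arguments yields distinct labels, so no separate arity table is needed during parsing.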
diff --git a/lisa-utils/src/main/scala/lisa/utils/KernelHelpers.scala b/lisa-utils/src/main/scala/lisa/utils/KernelHelpers.scala index 6a2c7478e8f4d9e41595b983809d0421a66791af..56981317d898906782d135b24b9458fdcdfd8f58 100644 --- a/lisa-utils/src/main/scala/lisa/utils/KernelHelpers.scala +++ b/lisa-utils/src/main/scala/lisa/utils/KernelHelpers.scala @@ -5,7 +5,7 @@ import lisa.kernel.proof.RunningTheoryJudgement import lisa.kernel.proof.RunningTheoryJudgement.InvalidJustification import lisa.kernel.proof.SCProof import lisa.kernel.proof.SCProofCheckerJudgement.SCInvalidProof -import lisa.kernel.proof.SequentCalculus._ +import lisa.kernel.proof.SequentCalculus.* /** * A helper file that provides various syntactic sugars for LISA's FOL and proofs. Best imported through utilities.Helpers @@ -26,6 +26,8 @@ trait KernelHelpers { /* Prefix syntax */ def neg(f: Formula): Formula = ConnectorFormula(Neg, Seq(f)) + def and(list: Formula*): Formula = ConnectorFormula(And, list) + def or(list: Formula*): Formula = ConnectorFormula(Or, list) def and(l: Formula, r: Formula): Formula = ConnectorFormula(And, Seq(l, r)) def or(l: Formula, r: Formula): Formula = ConnectorFormula(Or, Seq(l, r)) def implies(l: Formula, r: Formula): Formula = ConnectorFormula(Implies, Seq(l, r)) @@ -39,7 +41,7 @@ trait KernelHelpers { extension (label: ConnectorLabel) def apply(args: Formula*): Formula = ConnectorFormula(label, args) - extension (label: FunctionLabel) def apply(args: Term*): Term = FunctionTerm(label, args) + extension (label: TermLabel) def apply(args: Term*): Term = Term(label, args) extension (label: BinderLabel) def apply(bound: VariableLabel, inner: Formula): Formula = BinderFormula(label, bound, inner) @@ -88,13 +90,11 @@ trait KernelHelpers { /* Conversions */ - given Conversion[VariableLabel, VariableTerm] = VariableTerm.apply - given Conversion[VariableTerm, VariableLabel] = _.label + given Conversion[VariableLabel, Term] = Term(_, Seq()) + given Conversion[Term, TermLabel] = 
_.label given Conversion[PredicateFormula, PredicateLabel] = _.label given Conversion[PredicateLabel, Formula] = _.apply() - given Conversion[FunctionTerm, FunctionLabel] = _.label - given Conversion[SchematicFunctionLabel, Term] = _.apply() - given Conversion[VariableFormulaLabel, PredicateFormula] = PredicateFormula.apply(_, Nil) + given Conversion[VariableFormulaLabel, PredicateFormula] = PredicateFormula(_, Nil) given Conversion[(Boolean, List[Int], String), Option[(List[Int], String)]] = tr => if (tr._1) None else Some(tr._2, tr._3) given Conversion[Formula, Sequent] = () |- _ @@ -148,7 +148,7 @@ trait KernelHelpers { extension [A, T1 <: A](left: T1)(using SetConverter[Formula, T1]) infix def |-[B, T2 <: B](right: T2)(using SetConverter[Formula, T2]): Sequent = Sequent(any2set(left), any2set(right)) - def instantiatePredicateSchemaInSequent(s: Sequent, m: Map[SchematicPredicateLabel, LambdaTermFormula]): Sequent = { + def instantiatePredicateSchemaInSequent(s: Sequent, m: Map[SchematicVarOrPredLabel, LambdaTermFormula]): Sequent = { s.left.map(phi => instantiatePredicateSchemas(phi, m)) |- s.right.map(phi => instantiatePredicateSchemas(phi, m)) } def instantiateFunctionSchemaInSequent(s: Sequent, m: Map[SchematicTermLabel, LambdaTermTerm]): Sequent = { @@ -156,6 +156,11 @@ trait KernelHelpers { } extension (sp: SCSubproof) { + + /** + * Explore a proof along a specific path and return the proof step it points to. + * @param path A path through subproofs of a proof.
+ */ def followPath(path: Seq[Int]): SCProofStep = path match { case Nil => sp case n :: Nil => sp.sp(n) diff --git a/lisa-utils/src/main/scala/lisa/utils/Library.scala b/lisa-utils/src/main/scala/lisa/utils/Library.scala index 1cc2db84bce30730c8ac9a0ae85b7bebbd436cde..cfcb6846d43d5f1d85ca03192ea0d9bcc21a7c4c 100644 --- a/lisa-utils/src/main/scala/lisa/utils/Library.scala +++ b/lisa-utils/src/main/scala/lisa/utils/Library.scala @@ -13,7 +13,7 @@ abstract class Library(val theory: RunningTheory) { export lisa.kernel.proof.SequentCalculus.* export lisa.kernel.proof.SCProof as Proof export theory.{Justification, Theorem, Definition, Axiom, PredicateDefinition, FunctionDefinition} - export lisa.utils.Helpers.{*, given} + export lisa.utils.Helpers.{_, given} import lisa.kernel.proof.RunningTheoryJudgement as Judgement /** @@ -85,7 +85,7 @@ abstract class Library(val theory: RunningTheory) { case Judgement.ValidJustification(just) => last = Some(just) just - case wrongJudgement: Judgement.InvalidJustification[_] => wrongJudgement.showAndGet + case wrongJudgement: Judgement.InvalidJustification[?] 
=> wrongJudgement.showAndGet } /** @@ -101,7 +101,7 @@ abstract class Library(val theory: RunningTheory) { */ def simpleDefinition(symbol: String, expression: LambdaTermTerm): Judgement[theory.FunctionDefinition] = { val LambdaTermTerm(vars, body) = expression - val out: VariableLabel = VariableLabel(freshId((vars.map(_.id) ++ body.schematicTerms.map(_.id)).toSet, "y")) + val out: VariableLabel = VariableLabel(freshId((vars.map(_.id) ++ body.schematicTermLabels.map(_.id)).toSet, "y")) val proof: Proof = simpleFunctionDefinition(expression, out) theory.functionDefinition(symbol, LambdaTermFormula(vars, out === body), out, proof, Nil) } @@ -135,7 +135,7 @@ abstract class Library(val theory: RunningTheory) { case Judgement.ValidJustification(just) => last = Some(just) just - case wrongJudgement: Judgement.InvalidJustification[_] => wrongJudgement.showAndGet + case wrongJudgement: Judgement.InvalidJustification[?] => wrongJudgement.showAndGet } definition.label } @@ -148,7 +148,7 @@ abstract class Library(val theory: RunningTheory) { case Judgement.ValidJustification(just) => last = Some(just) just - case wrongJudgement: Judgement.InvalidJustification[_] => wrongJudgement.showAndGet + case wrongJudgement: Judgement.InvalidJustification[?] => wrongJudgement.showAndGet } definition.label } @@ -194,7 +194,7 @@ abstract class Library(val theory: RunningTheory) { case Judgement.ValidJustification(just) => last = Some(just) just - case wrongJudgement: Judgement.InvalidJustification[_] => wrongJudgement.showAndGet + case wrongJudgement: Judgement.InvalidJustification[?] => wrongJudgement.showAndGet } definition.label } @@ -211,7 +211,7 @@ abstract class Library(val theory: RunningTheory) { def DEFINE(symbol: String, vars: VariableLabel*): FunSymbolDefine = FunSymbolDefine(symbol, vars) /** - * For a definition of the type f(x) := term, construct the required proof ∃!y. y = term. + * For a definition of the type f(x) := term, construct the required proof ?!y. y = term. 
*/ private def simpleFunctionDefinition(expression: LambdaTermTerm, out: VariableLabel): Proof = { val x = out diff --git a/lisa-utils/src/main/scala/lisa/utils/Parser.scala b/lisa-utils/src/main/scala/lisa/utils/Parser.scala index 65bcb14503d1020919b22a86faebaf74a208a101..1cb6546d4946d67c7b51f9b30f89eca1d82bf7fe 100644 --- a/lisa-utils/src/main/scala/lisa/utils/Parser.scala +++ b/lisa-utils/src/main/scala/lisa/utils/Parser.scala @@ -98,7 +98,7 @@ object Parser { .printTerm(t) .getOrElse({ t match { - case FunctionTerm(_, args) => args.foreach(printTerm) + case Term(_, args) => args.foreach(printTerm) case VariableTerm(_) => () } throw PrintFailedException(t) @@ -286,7 +286,7 @@ object Parser { def invertTerm(t: Term): Token ~ Option[Seq[Term]] = t match { case VariableTerm(label) => SchematicToken(label.id) ~ None - case FunctionTerm(label, args) => + case Term(label, args) => val optArgs = args match { case Seq() => None case _ => Some(args) @@ -302,8 +302,8 @@ object Parser { { case ConstantToken(id) ~ maybeArgs => val args = maybeArgs.getOrElse(Seq()) - FunctionTerm(ConstantFunctionLabel(id, args.length), args) - case SchematicToken(id) ~ Some(args) => FunctionTerm(SchematicFunctionLabel(id, args.length), args) + Term(ConstantFunctionLabel(id, args.length), args) + case SchematicToken(id) ~ Some(args) => Term(SchematicFunctionLabel(id, args.length), args) case SchematicToken(id) ~ None => VariableTerm(VariableLabel(id)) case _ => throw UnreachableException }, @@ -317,12 +317,12 @@ object Parser { lazy val simpleFormula: Syntax[Formula] = predicate.up[Formula] | negated.up[Formula] | bool.up[Formula] lazy val subformula: Syntax[Formula] = simpleFormula | open.skip ~ formula ~ closed.skip - def createFunctionTerm(label: Token, args: Seq[Term]): Term = label match { - case ConstantToken(id) => FunctionTerm(ConstantFunctionLabel(id, args.size), args) + def createTerm(label: Token, args: Seq[Term]): Term = label match { + case ConstantToken(id) => 
Term(ConstantFunctionLabel(id, args.size), args) case SchematicToken(id) => args match { case Seq() => VariableTerm(VariableLabel(id)) - case _ => FunctionTerm(SchematicFunctionLabel(id, args.size), args) + case _ => Term(SchematicFunctionLabel(id, args.size), args) } case _ => throw UnreachableException } @@ -345,13 +345,13 @@ object Parser { case ConstantToken(id) ~ maybeArgs ~ None => val args = maybeArgs.getOrElse(Seq()) PredicateFormula(ConstantPredicateLabel(id, args.size), args) - case SchematicToken(id) ~ Some(args) ~ None => PredicateFormula(SchematicNPredicateLabel(id, args.size), args) + case SchematicToken(id) ~ Some(args) ~ None => PredicateFormula(SchematicPredicateLabel(id, args.size), args) case SchematicToken(id) ~ None ~ None => PredicateFormula(VariableFormulaLabel(id), Seq()) // equality of two function applications case fun1 ~ args1 ~ Some(fun2 ~ args2) => - PredicateFormula(FOL.equality, Seq(createFunctionTerm(fun1, args1.getOrElse(Seq())), createFunctionTerm(fun2, args2.getOrElse(Seq())))) + PredicateFormula(FOL.equality, Seq(createTerm(fun1, args1.getOrElse(Seq())), createTerm(fun2, args2.getOrElse(Seq())))) case _ => throw UnreachableException }, @@ -367,7 +367,7 @@ object Parser { case ConstantPredicateLabel(id, 0) => ConstantToken(id) ~ None case ConstantPredicateLabel(id, _) => ConstantToken(id) ~ Some(args) case VariableFormulaLabel(id) => SchematicToken(id) ~ None - case SchematicNPredicateLabel(id, _) => SchematicToken(id) ~ Some(args) + case SchematicPredicateLabel(id, _) => SchematicToken(id) ~ Some(args) } Seq(predicateApp ~ None) } diff --git a/lisa-utils/src/main/scala/lisa/utils/ProofsShrink.scala b/lisa-utils/src/main/scala/lisa/utils/ProofsShrink.scala new file mode 100644 index 0000000000000000000000000000000000000000..c9ada08f260b1a4e53db994e5e68fd965f3aa559 --- /dev/null +++ b/lisa-utils/src/main/scala/lisa/utils/ProofsShrink.scala @@ -0,0 +1,296 @@ +package lisa.utils + +import lisa.kernel.fol.FOL.* +import 
lisa.kernel.proof.SCProof +import lisa.kernel.proof.SequentCalculus.* + +/** + * Utilities to work with sequent-calculus proofs. + * All of these methods assume that the provided proofs are well-formed but not necessarily valid. + * If the provided proofs are valid, then the resulting proofs will also be valid. + */ +object ProofsShrink { + + /** + * Computes the size of a proof. Size corresponds to the number of proof steps. + * Subproofs count as one plus the size of their body. + * @param proof the proof to analyze + * @return the size of that proof + */ + def proofSize(proof: SCProof): Int = + proof.steps.map { + case SCSubproof(sp, _, _) => 1 + proofSize(sp) + case _ => 1 + }.sum + + /** + * Computes the depth of a proof. Depth corresponds to the maximum number of nested subproofs plus one. + * @param proof the proof to analyze + * @return the depth of that proof + */ + def proofDepth(proof: SCProof): Int = + proof.steps.map { + case SCSubproof(sp, _, _) => 1 + proofDepth(sp) + case _ => 1 + }.max + + /** + * Updates the indices of the premises in a proof step according to some provided mapping.
For example: + * <pre> + * mapPremises(Rewrite(sequent, 1), i => i + 1) == Rewrite(sequent, 2) + * </pre> + * @param step the proof step to update + * @param mapping the provided mapping + * @return a new step with the updated indices + */ + def mapPremises(step: SCProofStep, mapping: Int => Int): SCProofStep = step match { + case s: Rewrite => s.copy(t1 = mapping(s.t1)) + case s: RewriteTrue => s + case s: Hypothesis => s + case s: Cut => s.copy(t1 = mapping(s.t1), t2 = mapping(s.t2)) + case s: LeftAnd => s.copy(t1 = mapping(s.t1)) + case s: LeftOr => s.copy(t = s.t.map(mapping)) + case s: LeftImplies => s.copy(t1 = mapping(s.t1), t2 = mapping(s.t2)) + case s: LeftIff => s.copy(t1 = mapping(s.t1)) + case s: LeftNot => s.copy(t1 = mapping(s.t1)) + case s: LeftForall => s.copy(t1 = mapping(s.t1)) + case s: LeftExists => s.copy(t1 = mapping(s.t1)) + case s: LeftExistsOne => s.copy(t1 = mapping(s.t1)) + case s: RightAnd => s.copy(t = s.t.map(mapping)) + case s: RightOr => s.copy(t1 = mapping(s.t1)) + case s: RightImplies => s.copy(t1 = mapping(s.t1)) + case s: RightIff => s.copy(t1 = mapping(s.t1), t2 = mapping(s.t2)) + case s: RightNot => s.copy(t1 = mapping(s.t1)) + case s: RightForall => s.copy(t1 = mapping(s.t1)) + case s: RightExists => s.copy(t1 = mapping(s.t1)) + case s: RightExistsOne => s.copy(t1 = mapping(s.t1)) + case s: Weakening => s.copy(t1 = mapping(s.t1)) + case s: LeftRefl => s.copy(t1 = mapping(s.t1)) + case s: RightRefl => s + case s: LeftSubstEq => s.copy(t1 = mapping(s.t1)) + case s: RightSubstEq => s.copy(t1 = mapping(s.t1)) + case s: LeftSubstIff => s.copy(t1 = mapping(s.t1)) + case s: RightSubstIff => s.copy(t1 = mapping(s.t1)) + case s: SCSubproof => s.copy(premises = s.premises.map(mapping)) + case s: InstFunSchema => s.copy(t1 = mapping(s.t1)) + case s: InstPredSchema => s.copy(t1 = mapping(s.t1)) + } + + /** + * Flattens a proof recursively; in other words it removes all occurrences of [[SCSubproof]]. 
+ * Because the imports of subproofs can be rewritten, [[Rewrite]] steps may be inserted where that is necessary. + * The order of proof steps is preserved; indices of premises are adapted to reflect the new sequence. + * @param proof the proof to be flattened + * @return the flattened proof + */ + def flattenProof(proof: SCProof): SCProof = { + def flattenProofRecursive(steps: IndexedSeq[SCProofStep], topPremises: IndexedSeq[(Int, Sequent)], offset: Int): IndexedSeq[SCProofStep] = { + val (finalAcc, _) = steps.foldLeft((IndexedSeq.empty[SCProofStep], IndexedSeq.empty[(Int, Sequent)])) { case ((acc, localToGlobal), step) => + def resolve(i: Int): (Int, Sequent) = if (i >= 0) localToGlobal(i) else topPremises(-i - 1) + val newAcc = step match { + case SCSubproof(subProof, subPremises, _) => + val (rewrittenPremises, rewrittenSeq) = subPremises.zipWithIndex.flatMap { case (i, j) => + val (k, sequent) = resolve(i) + val imported = subProof.imports(j) + if (sequent != imported) { + Some((Rewrite(imported, k), -(j - 1) -> imported)) + } else { + None + } + }.unzip + val rewrittenMap = rewrittenSeq.zipWithIndex.map { case ((i, sequent), j) => i -> (offset + acc.size + j, sequent) }.toMap + val childTopPremises = subPremises.map(i => rewrittenMap.getOrElse(i, resolve(i))).toIndexedSeq + acc ++ rewrittenPremises ++ flattenProofRecursive(subProof.steps, childTopPremises, offset + acc.size + rewrittenPremises.size) + case _ => + acc :+ mapPremises(step, i => resolve(i)._1) + } + (newAcc, localToGlobal :+ (offset + newAcc.size - 1, step.bot)) + } + finalAcc + } + SCProof(flattenProofRecursive(proof.steps, proof.imports.zipWithIndex.map { case (imported, i) => (-i - 1, imported) }, 0), proof.imports) + } + + /** + * Eliminates all steps that are not directly or indirectly referenced by the conclusion (last step) of the proof. + * This procedure is applied recursively on all subproofs. The elimination of unused top-level imports can be configured.
+ * The order of proof steps is preserved; indices of premises are adapted to reflect the new sequence. + * @param proof the proof to be simplified + * @param eliminateTopLevelDeadImports whether the unused top-level imports should be eliminated as well + * @return the proof with dead steps eliminated + */ + def deadStepsElimination(proof: SCProof, eliminateTopLevelDeadImports: Boolean = true): SCProof = { + def deadStepsEliminationInternal(proof: SCProof, eliminateDeadImports: Boolean): (SCProof, IndexedSeq[Int]) = { + // We process the leaves first, otherwise we could miss dead branches (subproofs that have more imports than necessary) + val processedSteps = proof.steps.map { + case SCSubproof(sp, premises, display) => + val (newSubProof, newImportsIndices) = deadStepsEliminationInternal(sp, true) + SCSubproof(newSubProof, newImportsIndices.map(premises), display) + case other => other + } + val graph = processedSteps.map(_.premises) + val nodes = graph.indices + def bfs(visited: Set[Int], toVisit: Set[Int]): Set[Int] = { + if (toVisit.nonEmpty) { + val next = toVisit.flatMap(graph).diff(visited) + bfs(visited ++ next, next.filter(_ >= 0)) + } else { + visited + } + } + val conclusionNode = nodes.last // Must exist by assumption + val visited = bfs(Set(conclusionNode), Set(conclusionNode)) + val newNodes = nodes.filter(visited.contains) + val newImports = proof.imports.indices.map(i => -(i + 1)).filter(i => !eliminateDeadImports || visited.contains(i)) + val newImportsIndices = newImports.map(i => -(i + 1)) + val oldToNewStep = newNodes.zipWithIndex.toMap + val oldToNewImport = newImports.zipWithIndex.map { case (i, j) => (i, -(j + 1)) }.toMap + val map = oldToNewStep ++ oldToNewImport + val newSteps = newNodes.map(processedSteps).map(step => mapPremises(step, map)) + val newProof = SCProof(newSteps, newImportsIndices.map(proof.imports)) + (newProof, newImportsIndices) + } + val (newProof, _) = deadStepsEliminationInternal(proof, eliminateTopLevelDeadImports) +
newProof + } + + /** + * Removes proof steps that are identified to be redundant. The registered simplifications are the following: + * <ul> + * <li>Double/fruitless rewrites/weakening</li> + * <li>Fruitless instantiations</li> + * <li>Useless cut</li> + * </ul> + * This procedure may need to be called several times; it is guaranteed that a fixed point will eventually be reached. + * Imports will not change, dead branches will be preserved (but can still be simplified). + * @param proof the proof to be simplified + * @return the simplified proof + */ + def simplifyProof(proof: SCProof): SCProof = { + def isSequentSubset(subset: Sequent, superset: Sequent): Boolean = + isSubset(subset.left, superset.left) && isSubset(subset.right, superset.right) + def schematicPredicatesLabels(sequent: Sequent): Set[SchematicVarOrPredLabel] = + (sequent.left ++ sequent.right).flatMap(_.schematicPredicateLabels) + def schematicTermLabels(sequent: Sequent): Set[SchematicTermLabel] = + (sequent.left ++ sequent.right).flatMap(_.schematicTermLabels) + def freeSchematicTerms(sequent: Sequent): Set[SchematicTermLabel] = + (sequent.left ++ sequent.right).flatMap(_.freeSchematicTermLabels) + val (newSteps, _) = proof.steps.zipWithIndex.foldLeft((IndexedSeq.empty[SCProofStep], IndexedSeq.empty[Int])) { case ((acc, map), (oldStep, i)) => + def resolveLocal(j: Int): Int = { + require(j < i) + if (j >= 0) map(j) else j + } + def getSequentLocal(j: Int): Sequent = { + require(j < i) + if (j >= 0) acc(map(j)).bot else proof.getSequent(j) + } + object LocalStep { + def unapply(j: Int): Option[SCProofStep] = { + require(j < i) + if (j >= 0) Some(acc(map(j))) else None + } + } + val step = mapPremises(oldStep, resolveLocal) + val either: Either[SCProofStep, Int] = step match { + // General unary steps + case _ if step.premises.sizeIs == 1 && getSequentLocal(step.premises.head) == step.bot => + Right(step.premises.head) + case _ if !step.isInstanceOf[Rewrite] && step.premises.sizeIs == 1 && 
isSameSequent(getSequentLocal(step.premises.head), step.bot) => + Left(Rewrite(step.bot, step.premises.head)) + case _ + if !step.isInstanceOf[Rewrite] && !step.isInstanceOf[Weakening] + && step.premises.sizeIs == 1 && isSequentSubset(getSequentLocal(step.premises.head), step.bot) => + Left(Weakening(step.bot, step.premises.head)) + // Recursive + case SCSubproof(sp, premises, display) => + Left(SCSubproof(simplifyProof(sp), premises, display)) + // Double rewrite + case Rewrite(bot1, LocalStep(Rewrite(bot2, t2))) if isSameSequent(bot1, bot2) => + Left(Rewrite(bot1, t2)) + // Double weakening + case Weakening(bot1, LocalStep(Weakening(bot2, t2))) if isSequentSubset(bot2, bot1) => + Left(Weakening(bot1, t2)) + // Rewrite and weakening + case Weakening(bot1, LocalStep(Rewrite(_, t2))) if isSequentSubset(getSequentLocal(t2), bot1) => + Left(Weakening(bot1, t2)) + // Weakening and rewrite + case Rewrite(bot1, LocalStep(Weakening(_, t2))) if isSequentSubset(getSequentLocal(t2), bot1) => + Left(Weakening(bot1, t2)) + // Hypothesis and rewrite + case Rewrite(bot1, LocalStep(Hypothesis(_, phi))) if bot1.left.contains(phi) && bot1.right.contains(phi) => + Left(Hypothesis(bot1, phi)) + // Hypothesis and weakening + case Weakening(bot1, LocalStep(Hypothesis(_, phi))) if bot1.left.contains(phi) && bot1.right.contains(phi) => + Left(Hypothesis(bot1, phi)) + // Useless cut + case Cut(bot, _, t2, phi) if bot.left.contains(phi) => + Left(Weakening(bot, t2)) + case Cut(bot, t1, _, phi) if bot.right.contains(phi) => + Left(Weakening(bot, t1)) + // Fruitless instantiation + case InstPredSchema(bot, t1, _) if isSameSequent(bot, getSequentLocal(t1)) => + Left(Rewrite(bot, t1)) + case InstFunSchema(bot, t1, _) if isSameSequent(bot, getSequentLocal(t1)) => + Left(Rewrite(bot, t1)) + // Instantiation simplification + case InstPredSchema(bot, t1, insts) if !insts.keySet.subsetOf(schematicPredicatesLabels(getSequentLocal(t1))) => + val newInsts = insts -- 
insts.keySet.diff(schematicPredicatesLabels(getSequentLocal(t1))) + Left(InstPredSchema(bot, t1, newInsts)) + case InstFunSchema(bot, t1, insts) if !insts.keySet.subsetOf(schematicTermLabels(getSequentLocal(t1))) => + val newInsts = insts -- insts.keySet.diff(schematicTermLabels(getSequentLocal(t1))) + Left(InstFunSchema(bot, t1, newInsts)) + case other => Left(other) + } + either match { + case Left(newStep) => (acc :+ newStep, map :+ acc.size) + case Right(index) => (acc :+ oldStep, map :+ index) + } + } + SCProof(newSteps, proof.imports) + } + + /** + * Attempts to factor the premises such that the first occurrence of a proven sequent is used. + * This procedure is greedy. + * Unused proof steps will not be removed. Use [[deadStepsElimination]] for that. + * @param proof the proof to be factored + * @return the factored proof + */ + def factorProof(proof: SCProof): SCProof = { + val (initialMap, initialCache) = proof.imports.zipWithIndex.foldLeft((Map.empty[Int, Int], Map.empty[Sequent, Int])) { case ((map, cache), (sequent, i)) => + val originalIndex = -(i + 1) + cache.get(sequent) match { + case Some(existingIndex) => (map + (originalIndex -> existingIndex), cache) + case None => (map + (originalIndex -> originalIndex), cache + (sequent -> originalIndex)) + } + } + val (newSteps, _, _) = proof.steps.zipWithIndex.foldLeft((IndexedSeq.empty[SCProofStep], initialMap, initialCache)) { case ((acc, map, cache), (step, i)) => + val sequent = step.bot + val mappedStep = mapPremises(step, map) match { + case SCSubproof(sp, premises, display) => + SCSubproof(factorProof(sp), premises, display) + case other => other + } + val (newMap, newCache) = cache.get(sequent) match { + case Some(existingIndex) => (map + (i -> existingIndex), cache) + case None => (map + (i -> i), cache + (sequent -> i)) + } + (acc :+ mappedStep, newMap, newCache) + } + SCProof(newSteps, proof.imports) + } + + /** + * Optimizes a proof by applying all the available reduction rules until a fixed 
point is reached. + * @param proof the proof to be optimized + * @return the optimized proof + */ + def optimizeProofIteratively(proof: SCProof): SCProof = { + def optimizeFixedPoint(proof: SCProof): SCProof = { + val optimized = deadStepsElimination(factorProof(simplifyProof(proof))) + if (optimized == proof) optimized else optimizeFixedPoint(optimized) + } + optimizeFixedPoint(flattenProof(proof)) + } + +} diff --git a/lisa-utils/src/main/scala/lisa/utils/TheoriesHelpers.scala b/lisa-utils/src/main/scala/lisa/utils/TheoriesHelpers.scala index 384aff2acbcaa5a8c6c0bec3f00fb1becb3158cc..504c9077321f920874c26c4df5d990dfd255ee54 100644 --- a/lisa-utils/src/main/scala/lisa/utils/TheoriesHelpers.scala +++ b/lisa-utils/src/main/scala/lisa/utils/TheoriesHelpers.scala @@ -76,10 +76,12 @@ trait TheoriesHelpers extends KernelHelpers { case d: RunningTheory#Definition => d match { case pd: RunningTheory#PredicateDefinition => - output(s" Definition of predicate symbol ${pd.label.id} := ${Printer.prettyFormula(pd.label(pd.expression.vars.map(VariableTerm)*) <=> pd.expression.body)}\n") // (label, args, phi) + output( + s" Definition of predicate symbol ${pd.label.id} := ${Printer.prettyFormula(pd.label(pd.expression.vars.map(VariableTerm.apply)*) <=> pd.expression.body)}\n" + ) // (label, args, phi) case fd: RunningTheory#FunctionDefinition => - output(s" Definition of function symbol ${Printer.prettyTerm(fd.label(fd.expression.vars.map(VariableTerm)*))} := the ${fd.out.id} such that ${Printer - .prettyFormula((fd.out === fd.label(fd.expression.vars.map(VariableTerm)*)) <=> fd.expression.body)})\n") + output(s" Definition of function symbol ${Printer.prettyTerm(fd.label(fd.expression.vars.map(VariableTerm.apply)*))} := the ${fd.out.id} such that ${Printer + .prettyFormula((fd.out === fd.label(fd.expression.vars.map(VariableTerm.apply)*)) <=> fd.expression.body)})\n") } } just diff --git a/lisa-utils/src/test/scala/lisa/kernel/FolTests.scala 
b/lisa-utils/src/test/scala/lisa/kernel/FolTests.scala index 19c8e6702ec9ab648b59e3af2b0028c916924302..c3e944f384a128592c3c19db01590b8c3a0c21b8 100644 --- a/lisa-utils/src/test/scala/lisa/kernel/FolTests.scala +++ b/lisa-utils/src/test/scala/lisa/kernel/FolTests.scala @@ -28,7 +28,7 @@ class FolTests extends AnyFunSuite { val r = gen.between(0, 3) if (r == 0) { val name = "" + ('a' to 'e')(gen.between(0, 5)) - FunctionTerm(ConstantFunctionLabel(name, 0), List()) + Term(ConstantFunctionLabel(name, 0), List()) } else { val name = "" + ('v' to 'z')(gen.between(0, 5)) VariableTerm(VariableLabel(name)) @@ -38,16 +38,16 @@ class FolTests extends AnyFunSuite { val name = "" + ('f' to 'j')(gen.between(0, 5)) if (r == 0) { val name = "" + ('a' to 'e')(gen.between(0, 5)) - FunctionTerm(ConstantFunctionLabel(name, 0), List()) + Term(ConstantFunctionLabel(name, 0), List()) } else if (r == 1) { val name = "" + ('v' to 'z')(gen.between(0, 5)) VariableTerm(VariableLabel(name)) } - if (r <= 3) FunctionTerm(ConstantFunctionLabel(name, 1), Seq(termGenerator(maxDepth - 1, gen))) - else if (r <= 5) FunctionTerm(ConstantFunctionLabel(name, 2), Seq(termGenerator(maxDepth - 1, gen), termGenerator(maxDepth - 1, gen))) - else if (r == 6) FunctionTerm(ConstantFunctionLabel(name, 3), Seq(termGenerator(maxDepth - 1, gen), termGenerator(maxDepth - 1, gen), termGenerator(maxDepth - 1, gen))) + if (r <= 3) Term(ConstantFunctionLabel(name, 1), Seq(termGenerator(maxDepth - 1, gen))) + else if (r <= 5) Term(ConstantFunctionLabel(name, 2), Seq(termGenerator(maxDepth - 1, gen), termGenerator(maxDepth - 1, gen))) + else if (r == 6) Term(ConstantFunctionLabel(name, 3), Seq(termGenerator(maxDepth - 1, gen), termGenerator(maxDepth - 1, gen), termGenerator(maxDepth - 1, gen))) else - FunctionTerm( + Term( ConstantFunctionLabel(name, 4), Seq(termGenerator(maxDepth - 1, gen), termGenerator(maxDepth - 1, gen), termGenerator(maxDepth - 1, gen), termGenerator(maxDepth - 1, gen)) ) diff --git 
a/lisa-utils/src/test/scala/lisa/kernel/InvalidProofPathTests.scala b/lisa-utils/src/test/scala/lisa/kernel/InvalidProofPathTests.scala index 257f5193911d3f258ebbb191d3af02c906a9eb3f..1bf566e3b477295a18305539a7486eae43107baf 100644 --- a/lisa-utils/src/test/scala/lisa/kernel/InvalidProofPathTests.scala +++ b/lisa-utils/src/test/scala/lisa/kernel/InvalidProofPathTests.scala @@ -2,7 +2,7 @@ package lisa.kernel import lisa.kernel.proof.SCProofCheckerJudgement.SCInvalidProof import lisa.kernel.proof.SequentCalculus.* -import lisa.kernel.proof._ +import lisa.kernel.proof.* import lisa.test.ProofCheckerSuite import lisa.utils.Helpers.{_, given} diff --git a/lisa-utils/src/test/scala/lisa/test/TestTheoryLibrary.scala b/lisa-utils/src/test/scala/lisa/test/TestTheoryLibrary.scala index 2431591566d8735ef056e47994f4f3115a6e0378..3e06e42b6e9f85bb3d0c819ca4882dbfc17f9cbb 100644 --- a/lisa-utils/src/test/scala/lisa/test/TestTheoryLibrary.scala +++ b/lisa-utils/src/test/scala/lisa/test/TestTheoryLibrary.scala @@ -1,5 +1,7 @@ package lisa.test -object TestTheoryLibrary extends lisa.utils.Library(TestTheory.runningTestTheory) { +import lisa.utils.Library + +object TestTheoryLibrary extends Library(TestTheory.runningTestTheory) { export TestTheory.* } diff --git a/lisa-utils/src/test/scala/lisa/utils/ParserTest.scala b/lisa-utils/src/test/scala/lisa/utils/ParserTest.scala index 73eabd63a91fa5b6d39a05aeb0063933e37f7fad..8930b571eae0e8aaca64ca51164be202c93e0511 100644 --- a/lisa-utils/src/test/scala/lisa/utils/ParserTest.scala +++ b/lisa-utils/src/test/scala/lisa/utils/ParserTest.scala @@ -7,7 +7,7 @@ import org.scalatest.funsuite.AnyFunSuite class ParserTest extends AnyFunSuite with TestUtils { test("constant") { - assert(Parser.parseTerm("x") == FunctionTerm(cx, Seq())) + assert(Parser.parseTerm("x") == Term(cx, Seq())) } test("variable") { @@ -15,29 +15,29 @@ class ParserTest extends AnyFunSuite with TestUtils { } test("constant function application") { - 
assert(Parser.parseTerm("f()") == FunctionTerm(f0, Seq())) - assert(Parser.parseTerm("f(x)") == FunctionTerm(f1, Seq(cx))) - assert(Parser.parseTerm("f(x, y)") == FunctionTerm(f2, Seq(cx, cy))) - assert(Parser.parseTerm("f(x, y, z)") == FunctionTerm(f3, Seq(cx, cy, cz))) + assert(Parser.parseTerm("f()") == Term(f0, Seq())) + assert(Parser.parseTerm("f(x)") == Term(f1, Seq(cx))) + assert(Parser.parseTerm("f(x, y)") == Term(f2, Seq(cx, cy))) + assert(Parser.parseTerm("f(x, y, z)") == Term(f3, Seq(cx, cy, cz))) - assert(Parser.parseTerm("f(?x)") == FunctionTerm(f1, Seq(x))) - assert(Parser.parseTerm("f(?x, ?y)") == FunctionTerm(f2, Seq(x, y))) - assert(Parser.parseTerm("f(?x, ?y, ?z)") == FunctionTerm(f3, Seq(x, y, z))) + assert(Parser.parseTerm("f(?x)") == Term(f1, Seq(x))) + assert(Parser.parseTerm("f(?x, ?y)") == Term(f2, Seq(x, y))) + assert(Parser.parseTerm("f(?x, ?y, ?z)") == Term(f3, Seq(x, y, z))) } test("schematic function application") { // Parser.parseTerm("?f()") -- schematic functions of 0 arguments do not exist, those are variables - assert(Parser.parseTerm("?f(x)") == FunctionTerm(sf1, Seq(cx))) - assert(Parser.parseTerm("?f(x, y)") == FunctionTerm(sf2, Seq(cx, cy))) - assert(Parser.parseTerm("?f(x, y, z)") == FunctionTerm(sf3, Seq(cx, cy, cz))) + assert(Parser.parseTerm("?f(x)") == Term(sf1, Seq(cx))) + assert(Parser.parseTerm("?f(x, y)") == Term(sf2, Seq(cx, cy))) + assert(Parser.parseTerm("?f(x, y, z)") == Term(sf3, Seq(cx, cy, cz))) - assert(Parser.parseTerm("?f(?x)") == FunctionTerm(sf1, Seq(x))) - assert(Parser.parseTerm("?f(?x, ?y)") == FunctionTerm(sf2, Seq(x, y))) - assert(Parser.parseTerm("?f(?x, ?y, ?z)") == FunctionTerm(sf3, Seq(x, y, z))) + assert(Parser.parseTerm("?f(?x)") == Term(sf1, Seq(x))) + assert(Parser.parseTerm("?f(?x, ?y)") == Term(sf2, Seq(x, y))) + assert(Parser.parseTerm("?f(?x, ?y, ?z)") == Term(sf3, Seq(x, y, z))) } test("nested function application") { - assert(Parser.parseTerm("?f(?f(?x), ?y)") == FunctionTerm(sf2, 
Seq(FunctionTerm(sf1, Seq(x)), y))) + assert(Parser.parseTerm("?f(?f(?x), ?y)") == Term(sf2, Seq(Term(sf1, Seq(x)), y))) } test("0-ary predicate") { diff --git a/lisa-utils/src/test/scala/lisa/utils/PrinterTest.scala b/lisa-utils/src/test/scala/lisa/utils/PrinterTest.scala index 3a45954ca8ac2e251ea1faa0b57003e15b3cf1e9..e8c2a63c90a58f899a4306ff19a9a69046386a8b 100644 --- a/lisa-utils/src/test/scala/lisa/utils/PrinterTest.scala +++ b/lisa-utils/src/test/scala/lisa/utils/PrinterTest.scala @@ -40,7 +40,7 @@ class PrinterTest extends AnyFunSuite with TestUtils { } test("constant") { - assert(Parser.printTerm(FunctionTerm(cx, Seq())) == "x") + assert(Parser.printTerm(Term(cx, Seq())) == "x") } test("variable") { @@ -48,27 +48,27 @@ class PrinterTest extends AnyFunSuite with TestUtils { } test("constant function application") { - assert(Parser.printTerm(FunctionTerm(f1, Seq(cx))) == "f(x)") - assert(Parser.printTerm(FunctionTerm(f2, Seq(cx, cy))) == "f(x, y)") - assert(Parser.printTerm(FunctionTerm(f3, Seq(cx, cy, cz))) == "f(x, y, z)") + assert(Parser.printTerm(Term(f1, Seq(cx))) == "f(x)") + assert(Parser.printTerm(Term(f2, Seq(cx, cy))) == "f(x, y)") + assert(Parser.printTerm(Term(f3, Seq(cx, cy, cz))) == "f(x, y, z)") - assert(Parser.printTerm(FunctionTerm(f1, Seq(x))) == "f(?x)") - assert(Parser.printTerm(FunctionTerm(f2, Seq(x, y))) == "f(?x, ?y)") - assert(Parser.printTerm(FunctionTerm(f3, Seq(x, y, z))) == "f(?x, ?y, ?z)") + assert(Parser.printTerm(Term(f1, Seq(x))) == "f(?x)") + assert(Parser.printTerm(Term(f2, Seq(x, y))) == "f(?x, ?y)") + assert(Parser.printTerm(Term(f3, Seq(x, y, z))) == "f(?x, ?y, ?z)") } test("schematic function application") { - assert(Parser.printTerm(FunctionTerm(sf1, Seq(cx))) == "?f(x)") - assert(Parser.printTerm(FunctionTerm(sf2, Seq(cx, cy))) == "?f(x, y)") - assert(Parser.printTerm(FunctionTerm(sf3, Seq(cx, cy, cz))) == "?f(x, y, z)") + assert(Parser.printTerm(Term(sf1, Seq(cx))) == "?f(x)") + assert(Parser.printTerm(Term(sf2, 
Seq(cx, cy))) == "?f(x, y)") + assert(Parser.printTerm(Term(sf3, Seq(cx, cy, cz))) == "?f(x, y, z)") - assert(Parser.printTerm(FunctionTerm(sf1, Seq(x))) == "?f(?x)") - assert(Parser.printTerm(FunctionTerm(sf2, Seq(x, y))) == "?f(?x, ?y)") - assert(Parser.printTerm(FunctionTerm(sf3, Seq(x, y, z))) == "?f(?x, ?y, ?z)") + assert(Parser.printTerm(Term(sf1, Seq(x))) == "?f(?x)") + assert(Parser.printTerm(Term(sf2, Seq(x, y))) == "?f(?x, ?y)") + assert(Parser.printTerm(Term(sf3, Seq(x, y, z))) == "?f(?x, ?y, ?z)") } test("nested function application") { - assert(Parser.printTerm(FunctionTerm(sf2, Seq(FunctionTerm(sf1, Seq(x)), y))) == "?f(?f(?x), ?y)") + assert(Parser.printTerm(Term(sf2, Seq(Term(sf1, Seq(x)), y))) == "?f(?f(?x), ?y)") } test("0-ary predicate") { diff --git a/lisa-utils/src/test/scala/lisa/utils/SCProofStepFinderTests.scala b/lisa-utils/src/test/scala/lisa/utils/SCProofStepFinderTests.scala new file mode 100644 index 0000000000000000000000000000000000000000..bc5b1da08e570612e165551c560f014a6547381a --- /dev/null +++ b/lisa-utils/src/test/scala/lisa/utils/SCProofStepFinderTests.scala @@ -0,0 +1,190 @@ +package lisa.utils + +import org.scalatest.funsuite.AnyFunSuite + +import scala.language.adhocExtensions +/* +import utilities.Helpers.* +import utilities.Printer +import lisa.kernel.fol.FOL.* +import lisa.kernel.proof.* +import lisa.kernel.proof.SequentCalculus.* +import lisa.settheory.AxiomaticSetTheory.* +import lisa.kernel.proof.SCProofCheckerJudgement.* +import org.scalatest.funsuite.AnyFunSuite +import me.cassayre.florian.masterproject.util.SCProofBuilder.{_, given} + +import util.chaining.* +import scala.util.{Failure, Success, Try} + */ +class SCProofStepFinderTests extends AnyFunSuite { + /* + test("proof steps reconstruction") { + // These tests ensure that all the kernel proof steps can be generated + // To achieve that, we design proofs that require these steps to be used at some point + + val (x, y, z) = (VariableLabel("x"),
VariableLabel("y"), VariableLabel("z")) + val theory = new RunningTheory() + val (la, lb, lc) = (ConstantPredicateLabel("a", 0), ConstantPredicateLabel("b", 0), ConstantPredicateLabel("c", 0)) + Seq(la, lb, lc).foreach(theory.addSymbol) + val (a, b, c) = (PredicateFormula(la, Seq.empty), PredicateFormula(lb, Seq.empty), PredicateFormula(lc, Seq.empty)) + val (ls, lt) = (ConstantFunctionLabel("s", 0), ConstantFunctionLabel("t", 0)) + Seq(ls, lt).foreach(theory.addSymbol) + val (s, t) = (FunctionTerm(ls, Seq.empty), FunctionTerm(lt, Seq.empty)) + + implicit class VariableLabelEq(l: VariableLabel) { // Helper due to conflicts with scalatest's `===` + def ====(r: Term): Formula = PredicateFormula(equality, Seq(VariableTerm(l), r)) + def ====(r: VariableLabel): Formula = PredicateFormula(equality, Seq(VariableTerm(l), VariableTerm(r))) + } + implicit class FunctionLabelEq(l: FunctionLabel) { + def ====(r: Term): Formula = PredicateFormula(equality, Seq(FunctionTerm(l, Seq.empty), r)) + def ====(r: FunctionLabel): Formula = PredicateFormula(equality, Seq(FunctionTerm(l, Seq.empty), FunctionTerm(r, Seq.empty))) + } + implicit class TermEq(l: Term) { + def ====(r: Term): Formula = PredicateFormula(equality, Seq(l, r)) + def ====(r: FunctionLabel): Formula = PredicateFormula(equality, Seq(l, FunctionTerm(r, Seq.empty))) + } + + val proofs: Seq[((String, SCProofBuilder), PartialFunction[SCProofStep, Unit])] = Seq( + // (1.1) + "Hypothesis" -> SCProofBuilder( + a |- a, + ) -> { case _: Hypothesis => () }, + "Cut" -> SCProofBuilder( + a |- a, + Seq(a, b) |- a by 0, + c |- c, + c |- Seq(b, c) by 2, + Seq(a, c) |- Seq(a, c) by (1, 3), + ) -> { case _: Cut => () }, + "LeftAnd" -> SCProofBuilder( + a |- a, + (a /\ b) |- a by 0, + ) -> { case _: LeftAnd => () }, + "RightAnd" -> SCProofBuilder( + a |- a, + b |- b, + Seq(a, b) |- (a /\ b) by (0, 1), + ) -> { case _: RightAnd => () }, + "LeftOr" -> SCProofBuilder( + a |- a, + b |- b, + Seq(a, b, a \/ b) |- Seq(a, b) by (0, 1) + ) -> { 
case _: LeftOr => () }, + "RightOr" -> SCProofBuilder( + a |- a, + a |- (a \/ b) by 0, + ) -> { case _: RightOr => () }, + "LeftImplies" -> SCProofBuilder( + a |- a, + b |- b, + Seq(a, a ==> b) |- b by (0, 1), + ) -> { case _: LeftImplies => () }, + "RightImplies" -> SCProofBuilder( + a |- a, + a |- Seq(a, a ==> a) by 0, + ) -> { case _: RightImplies => () }, + "LeftIff" -> SCProofBuilder( + (a ==> b) |- (a ==> b), + (a <=> b) |- (a ==> b) by 0, + ) -> { case _: LeftIff => () }, + "RightIff" -> SCProofBuilder( + (a ==> b) |- (a ==> b), + (b ==> a) |- (b ==> a), + Seq(a ==> b, b ==> a) |- (a <=> b) by (0, 1), + ) -> { case _: RightIff => () }, + "LeftNot" -> SCProofBuilder( + a |- a, + a |- Seq(a, b) by 0, + Seq(a, !b) |- a by 1, + ) -> { case _: LeftNot => () }, + "RightNot" -> SCProofBuilder( + a |- a, + Seq(a, b) |- a by 0, + a |- Seq(a, !b) by 1, + ) -> { case _: RightNot => () }, + "LeftForall" -> SCProofBuilder( + (y ==== z) |- (y ==== z), + forall(x, x ==== z) |- (y ==== z) by 0, + ) -> { case _: LeftForall => () }, + "RightForall" -> SCProofBuilder( + (y ==== z) |- (y ==== z), + (y ==== z) |- forall(x, y ==== z) by 0, + ) -> { case _: RightForall => () }, + "LeftExists" -> SCProofBuilder( + (y ==== z) |- (y ==== z), + exists(x, y ==== z) |- (y ==== z) by 0, + ) -> { case _: LeftExists => () }, + "RightExists" -> SCProofBuilder( + (y ==== z) |- (y ==== z), + (y ==== z) |- exists(x, x ==== z) by 0, + ) -> { case _: RightExists => () }, + "LeftExistsOne" -> SCProofBuilder( + exists(y, forall(x, (x ==== y) <=> a)).pipe(f => f |- f), + existsOne(x, a) |- exists(y, forall(x, (x ==== y) <=> a)) by 0, + ) -> { case _: LeftExistsOne => () }, + "RightExistsOne" -> SCProofBuilder( + exists(y, forall(x, (x ==== y) <=> a)).pipe(f => f |- f), + exists(y, forall(x, (x ==== y) <=> a)) |- existsOne(x, a) by 0, + ) -> { case _: RightExistsOne => () }, + "(Left)Weakening" -> SCProofBuilder( + a |- a, + Seq(a, b) |- a by 0, + ) -> { case _: Weakening => () }, + 
"(Right)Weakening" -> SCProofBuilder( + a |- a, + a |- Seq(a, b) by 0, + ) -> { case _: Weakening => () }, + // (1.2) + "LeftSubstEq" -> SCProofBuilder( + (s ==== emptySet) |- (s ==== emptySet), + Seq(s ==== t, t ==== emptySet) |- (s ==== emptySet) by 0, + ) -> { case _: LeftSubstEq => () }, + "RightSubstEq" -> SCProofBuilder( + (s ==== emptySet) |- (s ==== emptySet), + Seq(s ==== emptySet, s ==== t) |- Seq(s ==== emptySet, t ==== emptySet) by 0, + ) -> { case _: RightSubstEq => () }, + "LeftSubstIff" -> SCProofBuilder( + a |- a, + Seq(b, a <=> b) |- a by 0, + ) -> { case _: LeftSubstIff => () }, + "RightSubstIff" -> SCProofBuilder( + a |- a, + Seq(a, a <=> b) |- b by 0, + ) -> { case _: RightSubstIff => () }, + "LeftRefl" -> SCProofBuilder( + a |- a, + Seq(a, b) |- a by 0, + Seq(a, b, emptySet ==== emptySet) |- a by 1, + Seq(a, b) |- a by 2, + ) -> { case _: LeftRefl => () }, + "RightRefl" -> SCProofBuilder( + () |- (emptySet ==== emptySet), + ) -> { case _: RightRefl => () }, + ) + + proofs.foreach { case ((testname, proofBuilder), partialFunction) => + val filter: SCProofStep => Boolean = partialFunction.lift(_).nonEmpty + Try(proofBuilder.build) match { + case Success(proof) => + SCProofChecker.checkSCProof(proof) match { + case SCValidProof(_) => // OK + println(testname) + println(Printer.prettySCProof(proof)) + println() + // Dirty, but only way to test that + val proofWithoutLast = proof.copy(steps = proof.steps.init) + proofBuilder.steps.last match { + case SCImplicitProofStep(conclusion, premises, imports) => + val view = SCProofStepFinder.findPossibleProofSteps(proofWithoutLast, conclusion, premises) + assert(view.exists(filter), s"The proof step finder was not able to find the step '$testname'") + case SCExplicitProofStep(step) => assert(false) + } + case invalid: SCInvalidProof => throw new AssertionError(s"The reconstructed proof for '$testname' is incorrect:\n${Printer.prettySCProof(invalid)}") + } + case Failure(exception) => throw new 
AssertionError(s"Couldn't reconstruct the proof for '$testname'", exception) // Couldn't reconstruct this proof + } + } + } + */ +} diff --git a/lisa-utils/src/test/scala/lisa/utils/TestUtils.scala b/lisa-utils/src/test/scala/lisa/utils/TestUtils.scala index 93976742db07b36304b86a94952184bf4b8e8970..3d7ee4df06c357fdc01100bdbca1679d3eb657d2 100644 --- a/lisa-utils/src/test/scala/lisa/utils/TestUtils.scala +++ b/lisa-utils/src/test/scala/lisa/utils/TestUtils.scala @@ -11,11 +11,11 @@ trait TestUtils { val (cx, cy, cz) = (ConstantFunctionLabel("x", 0), ConstantFunctionLabel("y", 0), ConstantFunctionLabel("z", 0)) val (f0, f1, f2, f3) = (ConstantFunctionLabel("f", 0), ConstantFunctionLabel("f", 1), ConstantFunctionLabel("f", 2), ConstantFunctionLabel("f", 3)) val (sf1, sf2, sf3) = (SchematicFunctionLabel("f", 1), SchematicFunctionLabel("f", 2), SchematicFunctionLabel("f", 3)) - val (sPhi1, sPhi2) = (SchematicNPredicateLabel("phi", 1), SchematicNPredicateLabel("phi", 2)) + val (sPhi1, sPhi2) = (SchematicPredicateLabel("phi", 1), SchematicPredicateLabel("phi", 2)) given Conversion[PredicateLabel, PredicateFormula] = PredicateFormula(_, Seq.empty) - given Conversion[ConstantFunctionLabel, FunctionTerm] = FunctionTerm(_, Seq()) + given Conversion[ConstantFunctionLabel, Term] = Term(_, Seq()) - given Conversion[VariableLabel, VariableTerm] = VariableTerm.apply + given Conversion[VariableLabel, Term] = VariableTerm.apply } diff --git a/src/main/scala/lisa/automation/Proof2.scala b/src/main/scala/lisa/automation/Proof2.scala new file mode 100644 index 0000000000000000000000000000000000000000..f29aa4650bc925b69b71b3982f2462b7e0000f31 --- /dev/null +++ b/src/main/scala/lisa/automation/Proof2.scala @@ -0,0 +1,51 @@ +package lisa.automation + +/** + * The proof package. 
+ */ +object Proof2 { + export lisa.front.proof.Proof.* + export lisa.automation.front.predef.Predef.* + val introHypo: RuleHypothesis.type = RuleHypothesis + val introLAnd: RuleIntroductionLeftAnd.type = RuleIntroductionLeftAnd + val introRAnd: RuleIntroductionRightAnd.type = RuleIntroductionRightAnd + val introLOr: RuleIntroductionLeftOr.type = RuleIntroductionLeftOr + val introROr: RuleIntroductionRightOr.type = RuleIntroductionRightOr + val introLImp: RuleIntroductionLeftImplies.type = RuleIntroductionLeftImplies + val introRImp: RuleIntroductionRightImplies.type = RuleIntroductionRightImplies + val introLIff: RuleIntroductionLeftIff.type = RuleIntroductionLeftIff + val introRIff: RuleIntroductionRightIff.type = RuleIntroductionRightIff + val introLNot: RuleIntroductionLeftNot.type = RuleIntroductionLeftNot + val introRNot: RuleIntroductionRightNot.type = RuleIntroductionRightNot + val introRRefl: RuleIntroductionRightRefl.type = RuleIntroductionRightRefl + val introLForall: RuleIntroductionLeftForall.type = RuleIntroductionLeftForall + val introRForall: RuleIntroductionRightForall.type = RuleIntroductionRightForall + val introLExists: RuleIntroductionLeftExists.type = RuleIntroductionLeftExists + val introRExists: RuleIntroductionRightExists.type = RuleIntroductionRightExists + val introLSubstEq: RuleIntroductionLeftSubstEq.type = RuleIntroductionLeftSubstEq + val introRSubstEq: RuleIntroductionRightSubstEq.type = RuleIntroductionRightSubstEq + val introLSubstIff: RuleIntroductionLeftSubstIff.type = RuleIntroductionLeftSubstIff + val introRSubstIff: RuleIntroductionRightSubstIff.type = RuleIntroductionRightSubstIff + // RuleIntroductionLeftExistsOne & RuleIntroductionRightExistsOne + val introRForallS: RuleIntroductionRightForallSchema.type = RuleIntroductionRightForallSchema + val introLExistsS: RuleIntroductionLeftExistsSchema.type = RuleIntroductionLeftExistsSchema + + val elimCut: RuleCut.type = RuleCut + val elimLRefl: RuleEliminationLeftRefl.type = 
RuleEliminationLeftRefl + val elimRForallS: RuleEliminationRightForallSchema.type = RuleEliminationRightForallSchema + val elimLSubstIff: RuleEliminationLeftSubstIff.type = RuleEliminationLeftSubstIff + val elimRSubstIff: RuleEliminationRightSubstIff.type = RuleEliminationRightSubstIff + val elimLSubstEq: RuleEliminationLeftSubstEq.type = RuleEliminationLeftSubstEq + val elimRSubstEq: RuleEliminationRightSubstEq.type = RuleEliminationRightSubstEq + val elimRNot: RuleEliminationRightNot.type = RuleEliminationRightNot + + val instFunS: TacticInstantiateFunctionSchema.type = TacticInstantiateFunctionSchema + + val solvePropFast: TacticSolverNative.type = TacticSolverNative + val solveProp: TacticPropositionalSolver.type = TacticPropositionalSolver + val rewrite: TacticalRewrite.type = TacticalRewrite + val weaken: TacticalWeaken.type = TacticalWeaken + + val justificationInst: TacticInstantiateApplyJustification.type = TacticInstantiateApplyJustification + +} diff --git a/src/main/scala/lisa/automation/front/predef/Predef.scala b/src/main/scala/lisa/automation/front/predef/Predef.scala new file mode 100644 index 0000000000000000000000000000000000000000..b275add8f36637cb41ee287e1c0dd96b3942fc73 --- /dev/null +++ b/src/main/scala/lisa/automation/front/predef/Predef.scala @@ -0,0 +1,3 @@ +package lisa.automation.front.predef + +object Predef extends PredefRulesDefinitions with PredefTacticsDefinitions with PredefCombinedDefinitions {} diff --git a/src/main/scala/lisa/automation/front/predef/PredefCombinedDefinitions.scala b/src/main/scala/lisa/automation/front/predef/PredefCombinedDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..8c47760c8f8287a2c3e42f13da248c7b7c7ddc90 --- /dev/null +++ b/src/main/scala/lisa/automation/front/predef/PredefCombinedDefinitions.scala @@ -0,0 +1,25 @@ +package lisa.automation.front.predef + +import lisa.front.proof.Proof.* + +trait PredefCombinedDefinitions extends PredefRulesDefinitions { + + val 
TacticPropositionalSolver: Tactic = TacticRepeat( + TacticFallback( + Seq( + RuleHypothesis, + RuleIntroductionLeftAnd, + RuleIntroductionRightAnd, + RuleIntroductionLeftOr, + RuleIntroductionRightOr, + RuleIntroductionLeftImplies, + RuleIntroductionRightImplies, + RuleIntroductionLeftIff, + RuleIntroductionRightIff, + RuleIntroductionLeftNot, + RuleIntroductionRightNot + ) + ) + ) + +} diff --git a/src/main/scala/lisa/automation/front/predef/PredefRulesDefinitions.scala b/src/main/scala/lisa/automation/front/predef/PredefRulesDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..49fe2ec40b9edfa929af8c71c6975c8a41b788a3 --- /dev/null +++ b/src/main/scala/lisa/automation/front/predef/PredefRulesDefinitions.scala @@ -0,0 +1,459 @@ +package lisa.automation.front.predef + +import lisa.front.fol.FOL.* +import lisa.front.proof.Proof.* +import lisa.front.proof.state.RuleDefinitions +import lisa.kernel.fol.FOL.LambdaFormulaFormula +import lisa.kernel.fol.FOL.LambdaTermFormula +import lisa.kernel.fol.FOL.LambdaTermTerm +import lisa.kernel.proof.SequentCalculus.* + +trait PredefRulesDefinitions { + + private case class SideBuilder(formulas: IndexedSeq[Formula], partial: Boolean) { + def |-(other: SideBuilder): PartialSequent = PartialSequent(formulas, other.formulas, partial, other.partial) + } + private def *(formulas: Formula*): SideBuilder = SideBuilder(formulas.toIndexedSeq, true) + private def $(formulas: Formula*): SideBuilder = SideBuilder(formulas.toIndexedSeq, false) + // This *must* be a def (see https://github.com/lampepfl/dotty/issues/14667) + private def ** : SideBuilder = SideBuilder(IndexedSeq.empty, true) + private def $$ : SideBuilder = SideBuilder(IndexedSeq.empty, false) + // private def &(hypotheses: PartialSequent*): IndexedSeq[PartialSequent] = hypotheses.toIndexedSeq + private given Conversion[PartialSequent, IndexedSeq[PartialSequent]] = IndexedSeq(_) + private val __ : IndexedSeq[PartialSequent] = IndexedSeq.empty + + 
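Editorial aside: the `SideBuilder` helpers above hide partial-vs-exact sequent sides behind a tiny operator DSL (`*`, `$`, `**`, `$$`, `|-`). A minimal language-agnostic sketch of the same idea, in Python rather than Scala, with purely illustrative names that are not part of LISA:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PartialSequent:
    left: tuple
    right: tuple
    partial_left: bool   # True: this side may gain extra formulas when the rule is applied
    partial_right: bool

@dataclass(frozen=True)
class SideBuilder:
    formulas: tuple
    partial: bool
    # stands in for Scala's `|-` operator: pairs two sides into a partial sequent
    def __or__(self, other: "SideBuilder") -> PartialSequent:
        return PartialSequent(self.formulas, other.formulas,
                              self.partial, other.partial)

def star(*formulas):    # like `*(...)` above: an open ("partial") side
    return SideBuilder(tuple(formulas), True)

def dollar(*formulas):  # like `$(...)` above: a closed ("exact") side
    return SideBuilder(tuple(formulas), False)

# models the conclusion pattern `*(a) |- *(a)` used by RuleHypothesis
hypo = star("a") | star("a")
```

Encoding openness as a flag on each side, rather than as a separate sequent type, is what lets one rule pattern match sequents with arbitrary extra context formulas.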
import Notations.* + + case object RuleHypothesis + extends RuleBase( + __, + *(a) |- *(a), + (bot, ctx) => IndexedSeq(Hypothesis(bot, ctx(a))) + ) + + // Introduction + + case object RuleIntroductionLeftAnd + extends RuleBase( + *(a, b) |- **, + *(a /\ b) |- **, + (bot, ctx) => IndexedSeq(LeftAnd(bot, -1, ctx(a), ctx(b))) + ) + + case object RuleIntroductionRightAnd + extends RuleBase( + (** |- *(a)) :+ (** |- *(b)), + ** |- *(a /\ b), + (bot, ctx) => IndexedSeq(RightAnd(bot, Seq(-1, -2), Seq(ctx(a), ctx(b)))) + ) + + case object RuleIntroductionLeftOr + extends RuleBase( + (*(a) |- **) :+ (*(b) |- **), + *(a \/ b) |- **, + (bot, ctx) => IndexedSeq(LeftOr(bot, Seq(-1, -2), Seq(ctx(a), ctx(b)))) + ) + + case object RuleIntroductionRightOr + extends RuleBase( + ** |- *(a, b), + ** |- *(a \/ b), + (bot, ctx) => IndexedSeq(RightOr(bot, -1, ctx(a), ctx(b))) + ) + + case object RuleIntroductionLeftImplies + extends RuleBase( + (** |- *(a)) :+ (*(b) |- **), + *(a ==> b) |- **, + (bot, ctx) => IndexedSeq(LeftImplies(bot, -1, -2, ctx(a), ctx(b))) + ) + + case object RuleIntroductionRightImplies + extends RuleBase( + *(a) |- *(b), + ** |- *(a ==> b), + (bot, ctx) => IndexedSeq(RightImplies(bot, -1, ctx(a), ctx(b))) + ) + + case object RuleIntroductionLeftIff + extends RuleBase( + *(a ==> b, b ==> a) |- **, + *(a <=> b) |- **, + (bot, ctx) => IndexedSeq(LeftIff(bot, -1, ctx(a), ctx(b))) + ) + + case object RuleIntroductionRightIff + extends RuleBase( + (** |- *(a ==> b)) :+ (** |- *(b ==> a)), + ** |- *(a <=> b), + (bot, ctx) => IndexedSeq(RightIff(bot, -1, -2, ctx(a), ctx(b))) + ) + + case object RuleIntroductionLeftNot + extends RuleBase( + ** |- *(a), + *(!a) |- **, + (bot, ctx) => IndexedSeq(LeftNot(bot, -1, ctx(a))) + ) + + case object RuleIntroductionRightNot + extends RuleBase( + *(a) |- **, + ** |- *(!a), + (bot, ctx) => IndexedSeq(RightNot(bot, -1, ctx(a))) + ) + + case object RuleIntroductionRightRefl + extends RuleBase( + __, + ** |- *(s === s), + (bot, ctx) => 
IndexedSeq(RightRefl(bot, ctx(s) === ctx(s))) + ) + + // Substitution + + case object RuleIntroductionLeftForall + extends RuleBase( + *(p(t)) |- **, + *(forall(x, p(x))) |- **, + (bot, ctx) => { + val lambda = ctx(p) + val px = lambda(VariableTerm(ctx(x))) + IndexedSeq( + LeftForall(bot, -1, px, ctx.variables(x), ctx(t)) + ) + } + ) + + case object RuleIntroductionRightForall + extends RuleBase( + ** |- *(p(x)), + ** |- *(forall(x, p(x))), + { + case (bot, ctx) if !(bot.left ++ bot.right).flatMap(_.freeVariables).contains(ctx(x)) => + // TODO x not already free in sequent; ideally this should be handled automatically in `Rule`, not here + val lambda = ctx(p) + val px = lambda(VariableTerm(ctx(x))) + IndexedSeq( + RightForall(bot, -1, px, ctx.variables(x)) + ) + } + ) + + case object RuleIntroductionLeftExists + extends RuleBase( + *(p(x)) |- **, + *(exists(x, p(x))) |- **, + { + case (bot, ctx) if !(bot.left ++ bot.right).flatMap(_.freeVariables).contains(ctx(x)) => + val lambda = ctx(p) + val px = lambda(VariableTerm(ctx(x))) + IndexedSeq( + LeftExists(bot, -1, px, ctx.variables(x)) + ) + } + ) + + case object RuleIntroductionRightExists + extends RuleBase( + ** |- *(p(t)), + ** |- *(exists(x, p(x))), + (bot, ctx) => { + val lambda = ctx(p) + val px = lambda(VariableTerm(ctx(x))) + IndexedSeq( + RightExists(bot, -1, px, ctx.variables(x), ctx(t)) + ) + } + ) + + case object RuleIntroductionLeftExistsOne + extends RuleBase( + *(exists(y, exists(x, (x === y) <=> p(x)))) |- **, + *(existsOne(x, p(x))) |- **, + (bot, ctx) => { + // TODO y not free in p + val lambda = ctx(p) + val px = lambda(VariableTerm(ctx(x))) + ??? 
+ } + ) + + // RuleIntroductionLeftExistsOne + + case object RuleIntroductionLeftSubstEq + extends RuleBase( + *(p(s)) |- **, + *(s === t, p(t)) |- **, + (bot, ctx) => + IndexedSeq( + LeftSubstEq(bot, -1, List(ctx(s) -> ctx(t)), ctx(p)) + ) + ) + + case object RuleIntroductionRightSubstEq + extends RuleBase( + ** |- *(p(s)), + *(s === t) |- *(p(t)), + (bot, ctx) => + IndexedSeq( + RightSubstEq(bot, -1, List(ctx(s) -> ctx(t)), ctx(p)) + ) + ) + + case object RuleIntroductionLeftSubstIff + extends RuleBase( + *(f(a)) |- **, + *(a <=> b, f(b)) |- **, + (bot, ctx) => + IndexedSeq( + LeftSubstIff(bot, -1, List(ctx(a) -> ctx(b)), ctx(f)) + ) + ) + + case object RuleIntroductionRightSubstIff + extends RuleBase( + ** |- *(f(a)), + *(a <=> b) |- *(f(b)), + (bot, ctx) => + IndexedSeq( + RightSubstIff(bot, -1, List(ctx(a) -> ctx(b)), ctx(f)) + ) + ) + + // + + case object RuleSubstituteRightIff + extends RuleBase( + (** |- *(f(a))) :+ ($$ |- $(a <=> b)), + ** |- *(f(b)), + (bot, ctx) => + IndexedSeq( + RightSubstIff(bot +< (ctx(a) <=> ctx(b)), -1, List(ctx(a) -> ctx(b)), ctx(f)), + Cut(bot, -2, 0, ctx(a) <=> ctx(b)) + ) + ) + + case object RuleSubstituteLeftIff + extends RuleBase( + (*(f(a)) |- **) :+ ($$ |- $(a <=> b)), + *(f(b)) |- **, + (bot, ctx) => + IndexedSeq( + LeftSubstIff(bot +< (ctx(a) <=> ctx(b)), -1, List(ctx(a) -> ctx(b)), ctx(f)), + Cut(bot, -2, 0, ctx(a) <=> ctx(b)) + ) + ) + + // Elimination + + case object RuleCut + extends RuleBase( + (** |- *(a)) :+ (*(a) |- **), + ** |- **, + (bot, ctx) => IndexedSeq(Cut(bot, -1, -2, ctx(a))) + ) + + case object RuleEliminationLeftRefl + extends RuleBase( + *(s === s) |- **, + ** |- **, + (bot, ctx) => IndexedSeq(LeftRefl(bot, -1, ctx(s) === ctx(s))) + ) + + case object RuleEliminationLeftAnd + extends RuleBase( + *(a /\ b) |- **, + *(a, b) |- **, + (bot, ctx) => + IndexedSeq( + Hypothesis(bot +> ctx(a), ctx(a)), + Hypothesis(bot +> ctx(b), ctx(b)), + RightAnd(bot +> (ctx(a) /\ ctx(b)), Seq(0, 1), Seq(ctx(a), ctx(b))), + 
Cut(bot, 2, -1, ctx(a) /\ ctx(b)) + ) + ) + + case object RuleEliminationRightOr + extends RuleBase( + ** |- *(a \/ b), + ** |- *(a, b), + (bot, ctx) => + IndexedSeq( + Hypothesis(bot +< ctx(a), ctx(a)), + Hypothesis(bot +< ctx(b), ctx(b)), + LeftOr(bot +< (ctx(a) \/ ctx(b)), Seq(0, 1), Seq(ctx(a), ctx(b))), + Cut(bot, -1, 2, ctx(a) \/ ctx(b)) + ) + ) + + case object RuleEliminationRightForallSchema + extends RuleBase( + ** |- *(forall(x, p(x))), + ** |- *(p(t)), + (bot, ctx) => { + val xlab: VariableLabel = ctx.variables(x) + val vx = VariableTerm(xlab) + val tv = ctx(t) + val (px, pt) = (ctx(p)(vx), ctx(p)(tv)) + val fpx = forall(ctx.variables(x), px) + val cBot = bot -> pt + IndexedSeq( + Hypothesis(bot +< pt, pt), + LeftForall(bot +< fpx, 0, px, xlab, tv), + Cut(bot, -1, 1, fpx) + ) + } + ) + + case object RuleModusPonens + extends RuleBase( + (** |- *(a)) :+ ($(a) |- $(b)), + ** |- *(b), + (bot, ctx) => + IndexedSeq( + Cut(bot, -1, -2, ctx(a)) + ) + ) + + case object RuleEliminationRightNot + extends RuleBase( + ** |- *(!a), + *(a) |- **, + (bot, ctx) => + IndexedSeq( + Hypothesis(ctx(a) |- ctx(a), ctx(a)), + LeftNot((ctx(a), !ctx(a)) |- (), 0, ctx(a)), + Cut(bot, -1, 1, !ctx(a)) + ) + ) + + case object RuleIntroductionRightForallSchema + extends RuleBase( + ** |- *(p(t)), + ** |- *(forall(x, p(x))), + (bot, ctx) => { + ctx(t) match { + case Term(pl: SchematicTermLabel[?], Seq()) => + val xlab = ctx.variables(x) + val vx = VariableTerm(xlab) + val px = ctx(p)(xlab) + val cBot = bot -> forall(xlab, px) + val pBot = cBot +> px + require(!(bot.left ++ bot.right).flatMap(_.freeVariables).contains(ctx(x))) + require(!(pBot.left ++ pBot.right).flatMap(_.freeSchematicTermLabels).contains(pl)) + IndexedSeq( + InstFunSchema(pBot, -1, Map(toKernel(pl) -> LambdaFunction(vx))), + RightForall(bot, 0, px, xlab) + ) + case e => throw new MatchError(e) + } + } + ) + + case object RuleIntroductionLeftExistsSchema + extends RuleBase( + *(p(t)) |- **, + *(exists(x, p(x))) |- **, 
+ (bot, ctx) => { + ctx(t) match { + case Term(pl: SchematicTermLabel[?], Seq()) => + val xlab = ctx.variables(x) + val vx = VariableTerm(xlab) + val px = ctx(p)(vx) + val cBot = bot -< exists(xlab, px) + val pBot = cBot +< px + require(!(bot.left ++ bot.right).flatMap(_.freeVariables).contains(ctx(x))) + require(!(pBot.left ++ pBot.right).flatMap(_.freeSchematicTermLabels).contains(pl)) + IndexedSeq( + InstFunSchema(pBot, -1, Map(toKernel(pl) -> LambdaFunction(vx))), + LeftExists(bot, 0, px, xlab) + ) + case e => throw new MatchError(e) + } + } + ) + + case object RuleEliminationLeftSubstEq + extends RuleBase( + (*(p(s)) |- **) +: (** |- *(s === t)), + *(p(t)) |- **, + (bot, ctx) => + IndexedSeq( + LeftSubstEq(bot +< (ctx(s) === ctx(t)), -1, List(ctx(s) -> ctx(t)), ctx(p)), + Cut(bot, -2, 0, ctx(s) === ctx(t)) + ) + ) + + case object RuleEliminationRightSubstEq + extends RuleBase( + (** |- *(p(s))) +: (** |- *(s === t)), + ** |- *(p(t)), + (bot, ctx) => + IndexedSeq( + RightSubstEq(bot +< (ctx(s) === ctx(t)), -1, List(ctx(s) -> ctx(t)), ctx(p)), + Cut(bot, -2, 0, ctx(s) === ctx(t)) + ) + ) + + case object RuleEliminationLeftSubstIff + extends RuleBase( + (*(f(a)) |- **) +: (** |- *(a <=> b)), + *(f(b)) |- **, + (bot, ctx) => + IndexedSeq( + LeftSubstIff(bot +< (ctx(a) <=> ctx(b)), -1, List(ctx(a) -> ctx(b)), ctx(f)), + Cut(bot, -2, 0, ctx(a) <=> ctx(b)) + ) + ) + + case object RuleEliminationRightSubstIff + extends RuleBase( + (** |- *(f(a))) +: (** |- *(a <=> b)), + ** |- *(f(b)), + (bot, ctx) => + IndexedSeq( + RightSubstIff(bot +< (ctx(a) <=> ctx(b)), -1, List(ctx(a) -> ctx(b)), ctx(f)), + Cut(bot, -2, 0, ctx(a) <=> ctx(b)) + ) + ) + + // TODO more rules + + // Move this + /*case class GeneralTacticRightIff(parameters: RuleTacticParameters) extends GeneralTactic { + import Notations.* + + override def apply(proofGoal: Sequent): Option[(IndexedSeq[Sequent], ReconstructGeneral)] = { + parameters.formulas.collect { case (IndexedSeq(), IndexedSeq(i)) if 
proofGoal.right.indices.contains(i) => + val formula = proofGoal.right(i) + val ea = instantiatePredicateSchemas(parameters.predicates(c), Map(e -> (parameters.predicates(a), Seq.empty))) + val eb = instantiatePredicateSchemas(parameters.predicates(c), Map(e -> (parameters.predicates(b), Seq.empty))) + if(formula == eb) { // TODO isSame + val bot: lisa.kernel.proof.SequentCalculus.Sequent = proofGoal + Some( + IndexedSeq( + proofGoal.copy(right = (proofGoal.right.take(i) :+ ea) ++ proofGoal.right.drop(i + 1)), + () |- parameters.predicates(a) <=> parameters.predicates(b), + ), + () => + IndexedSeq( + RightSubstIff( + bot +< (parameters.predicates(a) <=> parameters.predicates(b)), + -1, + parameters.predicates(a), + parameters.predicates(b), + parameters.predicates(c), // f(e) + e, + ), + Cut(bot, -2, 0, parameters.predicates(a) <=> parameters.predicates(b)) + ) + ) + } else { + None + } + }.flatten + } + }*/ + +} diff --git a/src/main/scala/lisa/automation/front/predef/PredefTacticsDefinitions.scala b/src/main/scala/lisa/automation/front/predef/PredefTacticsDefinitions.scala new file mode 100644 index 0000000000000000000000000000000000000000..22a9b758899dfc1f7d76ccc2b40130763d902929 --- /dev/null +++ b/src/main/scala/lisa/automation/front/predef/PredefTacticsDefinitions.scala @@ -0,0 +1,173 @@ +package lisa.automation.front.predef + +import lisa.automation.kernel.SimplePropositionalSolver +import lisa.front.fol.FOL.* +import lisa.front.proof.Proof.* +import lisa.front.proof.state.ProofEnvironmentDefinitions +import lisa.front.proof.unification.UnificationUtils +import lisa.kernel.proof.SCProof +import lisa.kernel.proof.SequentCalculus as KSC + +trait PredefTacticsDefinitions { + + case object TacticSolverNative extends TacticGoalFunctional { + import Notations.* + + override def apply(proofGoal: Sequent): Option[(IndexedSeq[Sequent], ReconstructSteps)] = { + val steps = SimplePropositionalSolver.solveSequent(proofGoal).steps + Some((IndexedSeq.empty, () => steps)) 
+ } + } + case class TacticRewritePartial(left: Map[Int, Formula] = Map.empty, right: Map[Int, Formula] = Map.empty) extends TacticGoalFunctional { + override def apply(proofGoal: Sequent): Option[(IndexedSeq[Sequent], ReconstructSteps)] = { + if (left.keySet.forall(proofGoal.left.indices.contains) && right.keySet.forall(proofGoal.right.indices.contains)) { + val rewritten = Sequent( + proofGoal.left.indices.map(i => left.getOrElse(i, proofGoal.left(i))), + proofGoal.right.indices.map(i => right.getOrElse(i, proofGoal.right(i))) + ) + if (isSameSequent(proofGoal, rewritten)) { + Some((IndexedSeq(rewritten), () => IndexedSeq(KSC.Rewrite(rewritten, -1)))) + } else { + None + } + } else { + None + } + } + } + + case class TacticRewriteSequent(rewritten: Sequent) extends TacticGoalFunctional { + override def apply(proofGoal: Sequent): Option[(IndexedSeq[Sequent], ReconstructSteps)] = { + if (isSameSequent(proofGoal, rewritten)) { + Some((IndexedSeq(rewritten), () => IndexedSeq(KSC.Rewrite(proofGoal, -1)))) + } else { + None + } + } + } + + object TacticalRewrite { + def apply(left: Map[Int, Formula] = Map.empty, right: Map[Int, Formula] = Map.empty): TacticRewritePartial = + TacticRewritePartial(left, right) + def apply(rewritten: Sequent): TacticRewriteSequent = TacticRewriteSequent(rewritten) + } + + case class TacticWeakenPartial(left: Set[Int] = Set.empty, right: Set[Int] = Set.empty) extends TacticGoalFunctional { + override def apply(proofGoal: Sequent): Option[(IndexedSeq[Sequent], ReconstructSteps)] = { + if (left.forall(proofGoal.left.indices.contains) && right.forall(proofGoal.right.indices.contains)) { + val weaker = Sequent( + proofGoal.left.zipWithIndex.filter { case (_, i) => !left.contains(i) }.map { case (f, _) => f }, + proofGoal.right.zipWithIndex.filter { case (_, i) => !right.contains(i) }.map { case (f, _) => f } + ) + Some((IndexedSeq(weaker), () => IndexedSeq(KSC.Weakening(proofGoal, -1)))) + } else { + None + } + } + } + + case class 
TacticWeakenSequent(weaker: Sequent) extends TacticGoalFunctional { + override def apply(proofGoal: Sequent): Option[(IndexedSeq[Sequent], ReconstructSteps)] = { + // This can be made powerful with a matching algorithm + ??? + } + } + + object TacticalWeaken { + def apply(left: Set[Int] = Set.empty, right: Set[Int] = Set.empty): TacticWeakenPartial = + TacticWeakenPartial(left, right) + // def apply(weaker: Sequent): TacticWeakenSequent = TacticWeakenSequent(weaker) + } + + case class TacticInstantiateFunctionSchema(sequent: Sequent, assigned: AssignedFunction) extends TacticGoalFunctional { + override def apply(proofGoal: Sequent): Option[(IndexedSeq[Sequent], ReconstructSteps)] = { + val map = Seq(assigned) + val instantiated = Sequent( + sequent.left.map(formula => instantiateFormulaSchemas(formula, functions = map)), + sequent.right.map(formula => instantiateFormulaSchemas(formula, functions = map)) + ) + if (isSameSequent(proofGoal, instantiated)) { + Some( + ( + IndexedSeq(sequent), + () => IndexedSeq(KSC.InstFunSchema(proofGoal, -1, Map(toKernel(assigned.schema) -> assigned.lambda))) + ) + ) + } else { + None + } + } + } + + extension (theorem: Theorem) { + def apply(assigned: AssignedFunction): Theorem = { + val map: Seq[AssignedFunction] = Seq(assigned) + val replaced = Sequent( + theorem.sequent.left.map(formula => instantiateFormulaSchemas(formula, functions = map)), + theorem.sequent.right.map(formula => instantiateFormulaSchemas(formula, functions = map)) + ) + val scProof = SCProof( + IndexedSeq( + KSC.InstFunSchema(replaced, -1, Map(toKernel(assigned.schema) -> assigned.lambda)) + ), + IndexedSeq(sequentToKernel(theorem.sequent)) + ) + theorem.environment.mkTheorem(replaced, scProof, IndexedSeq(theorem)) + } + def apply(assigned: AssignedPredicate): Theorem = { + val map: Seq[AssignedPredicate] = Seq(assigned) + val replaced = Sequent( + theorem.sequent.left.map(formula => instantiateFormulaSchemas(formula, predicates = map)), + 
theorem.sequent.right.map(formula => instantiateFormulaSchemas(formula, predicates = map)) + ) + val scProof = SCProof( + IndexedSeq( + KSC.InstPredSchema(replaced, -1, Map(toKernel(assigned.schema) -> assigned.lambda)) + ), + IndexedSeq(sequentToKernel(theorem.sequent)) + ) + theorem.environment.mkTheorem(replaced, scProof, IndexedSeq(theorem)) + } + + def rewrite(rewritten: Sequent): Theorem = + TacticRewriteSequent(theorem.sequent) + .apply(rewritten) + .map { case (_, reconstruct) => + val scProof = SCProof(reconstruct(), IndexedSeq(sequentToKernel(theorem.sequent))) + theorem.environment.mkTheorem(rewritten, scProof, IndexedSeq(theorem)) + } + .get + } + + case class TacticInstantiateApplyJustification(justified: Justified) extends TacticGoalFunctionalPruning { + def apply(proofGoal: Sequent): Option[(IndexedSeq[Either[Sequent, Justified]], ReconstructSteps)] = { + val patterns = IndexedSeq(PartialSequent(justified.sequent.left, justified.sequent.right)) + val values = IndexedSeq(proofGoal) + matchIndices(Map.empty, patterns, values).flatMap { selector => + // TODO we should instantiate to temporary schemas first otherwise we risk clashing names + unifyAndResolve(patterns, values, patterns, UnificationContext(), selector).map { case (IndexedSeq(resolved), ctx) => + val withFunctions = instantiateSequentSchemas(justified.sequent, ctx.assignedFunctions, Seq.empty, Seq.empty) + val withFunctionsAndPredicates = instantiateSequentSchemas(withFunctions, Seq.empty, ctx.assignedPredicates, Seq.empty) + ( + IndexedSeq(Right(justified)), + () => + IndexedSeq( + KSC.InstFunSchema( + sequentToKernel(withFunctions), + -1, + ctx.assignedFunctions.map(assigned => toKernel(assigned.schema) -> toKernel(assigned.lambda)).toMap + ), + KSC.InstPredSchema( + sequentToKernel(withFunctionsAndPredicates), + 0, + ctx.assignedPredicates.map(assigned => toKernel(assigned.schema) -> toKernel(assigned.lambda)).toMap + ), + KSC.Weakening(sequentToKernel(proofGoal), 1) + ) + ) + } + 
}.headOption + } + } + +} diff --git a/src/main/scala/lisa/proven/tactics/Destructors.scala b/src/main/scala/lisa/automation/kernel/Destructors.scala similarity index 91% rename from src/main/scala/lisa/proven/tactics/Destructors.scala rename to src/main/scala/lisa/automation/kernel/Destructors.scala index f7b978dd03aa4791385229e1dad97d794fcd9e4a..08736ca779f5425bff4e3f6377bee5de10e82cd4 100644 --- a/src/main/scala/lisa/proven/tactics/Destructors.scala +++ b/src/main/scala/lisa/automation/kernel/Destructors.scala @@ -1,10 +1,9 @@ -package lisa.proven.tactics +package lisa.automation.kernel +import lisa.automation.kernel.ProofTactics.hypothesis import lisa.kernel.fol.FOL.* import lisa.kernel.proof.SCProof import lisa.kernel.proof.SequentCalculus.* -import lisa.kernel.proof.SequentCalculus.* -import lisa.proven.tactics.ProofTactics.hypothesis import lisa.utils.Helpers.* object Destructors { @@ -17,12 +16,14 @@ object Destructors { val p3 = Cut(p.conclusion -> mat.get +> a +> b, p.length - 1, p.length + 2, a \/ b) p withNewSteps IndexedSeq(p0, p1, p2, p3) } + def destructRightAnd(p: SCProof, f: Formula, g: Formula): SCProof = { val p0 = hypothesis(f) // n val p1 = LeftAnd(emptySeq +< (f /\ g) +> f, p.length, f, g) // n+1 val p2 = Cut(p.conclusion -> (f /\ g) -> (g /\ f) +> f, p.length - 1, p.length + 1, f /\ g) p withNewSteps IndexedSeq(p0, p1, p2) } + def destructRightImplies(p: SCProof, f: Formula, g: Formula): SCProof = { // |- f=>g val p0 = hypothesis(f) // f |- f val p1 = hypothesis(g) // g |- g diff --git a/src/main/scala/lisa/proven/tactics/ProofTactics.scala b/src/main/scala/lisa/automation/kernel/ProofTactics.scala similarity index 97% rename from src/main/scala/lisa/proven/tactics/ProofTactics.scala rename to src/main/scala/lisa/automation/kernel/ProofTactics.scala index c75768cb0971e5fe32b5389d62edc75cd795a211..d90b8208ddd2b269f8ed2a17536ba0021505ecb1 100644 --- a/src/main/scala/lisa/proven/tactics/ProofTactics.scala +++ 
b/src/main/scala/lisa/automation/kernel/ProofTactics.scala @@ -1,4 +1,4 @@ -package lisa.proven.tactics +package lisa.automation.kernel import lisa.kernel.fol.FOL.* import lisa.kernel.proof.SCProof @@ -6,8 +6,6 @@ import lisa.kernel.proof.SequentCalculus.* import lisa.utils.Helpers.{_, given} import lisa.utils.Printer.* -import scala.collection.immutable.Set - /** * SCProof tactics are a set of strategies that help the user write proofs in a more expressive way * by focusing on the final goal rather on the individual steps. @@ -30,6 +28,7 @@ object ProofTactics { case _ => throw new Exception("not a forall") } } + def instantiateForall(p: SCProof, phi: Formula, t: Term*): SCProof = { // given a proof with a formula quantified with \forall on the right, extend the proof to the same formula with something instantiated instead. t.foldLeft((p, phi)) { case ((p, f), t1) => ( @@ -41,19 +40,25 @@ object ProofTactics { ) }._1 } + def instantiateForall(p: SCProof, t: Term): SCProof = instantiateForall(p, p.conclusion.right.head, t) // if a single formula on the right + def instantiateForall(p: SCProof, t: Term*): SCProof = { // given a proof with a formula quantified with \forall on the right, extend the proof to the same formula with something instantiated instead. t.foldLeft(p)((p1, t1) => instantiateForall(p1, t1)) } + def generalizeToForall(p: SCProof, phi: Formula, x: VariableLabel): SCProof = { require(p.conclusion.right.contains(phi)) val p1 = RightForall(p.conclusion -> phi +> forall(x, phi), p.length - 1, phi, x) p appended p1 } + def generalizeToForall(p: SCProof, x: VariableLabel): SCProof = generalizeToForall(p, p.conclusion.right.head, x) + def generalizeToForall(p: SCProof, x: VariableLabel*): SCProof = { // given a proof with a formula on the right, extend the proof to the same formula with variables universally quantified. 
x.foldRight(p)((x1, p1) => generalizeToForall(p1, x1)) } + def byEquiv(f: Formula, f1: Formula)(pEq: SCProofStep, pr1: SCProofStep): SCProof = { require(pEq.bot.right.contains(f)) require(pr1.bot.right.contains(f1)) @@ -89,6 +94,7 @@ object ProofTactics { })*/ SCProof(v) } + // p1 is a proof of psi given phi, p2 is a proof of psi given !phi def byCase(phi: Formula)(pa: SCProofStep, pb: SCProofStep): SCProof = { val nphi = !phi @@ -99,6 +105,7 @@ object ProofTactics { val p3 = Cut(pa.bot -< leftAphi.get ++ (pb.bot -< leftBnphi.get), 2, 1, nphi) SCProof(IndexedSeq(pa, pb, p2, p3)) } + // pa is a proof of phi, pb is a proof of phi ==> ??? // |- phi ==> psi, phi===>gamma |- phi // ------------------------------------- @@ -141,8 +148,9 @@ object ProofTactics { } case _ => (c, false) } + def detectSubstitutionT(x: VariableLabel, t: Term, s: Term, c: Option[Term] = None): (Option[Term], Boolean) = (t, s) match { - case (y: VariableTerm, z: Term) => { + case (y @ Term(l: VariableLabel, _), z: Term) => { if (isSame(y.label, x)) { if (c.isDefined) { (c, isSame(c.get, z)) @@ -151,7 +159,7 @@ object ProofTactics { } } else (c, isSame(y, z)) } - case (FunctionTerm(la1, args1), FunctionTerm(la2, args2)) if isSame(la1, la2) => { + case (Term(la1, args1), Term(la2, args2)) if isSame(la1, la2) => { args1 .zip(args2) .foldLeft[(Option[Term], Boolean)](c, true)((r1, a) => { diff --git a/src/main/scala/lisa/proven/tactics/SimplePropositionalSolver.scala b/src/main/scala/lisa/automation/kernel/SimplePropositionalSolver.scala similarity index 98% rename from src/main/scala/lisa/proven/tactics/SimplePropositionalSolver.scala rename to src/main/scala/lisa/automation/kernel/SimplePropositionalSolver.scala index 65ce706f5723c6d4c5b5d9ac46bb211b81c0093d..ab43254c250ccfe7fbc06550fc8c14cea36d87a2 100644 --- a/src/main/scala/lisa/proven/tactics/SimplePropositionalSolver.scala +++ b/src/main/scala/lisa/automation/kernel/SimplePropositionalSolver.scala @@ -1,4 +1,4 @@ -package lisa.proven.tactics 
+package lisa.automation.kernel import lisa.kernel.fol.FOL.* import lisa.kernel.proof.SCProof @@ -27,6 +27,7 @@ object SimplePropositionalSolver { case Iff => if (add) iffs.add(phi) else iffs.remove(phi) case And => if (add) ands.add(phi) else ands.remove(phi) case Or => if (add) ors.add(phi) else ors.remove(phi) + case _ => if (add) others.add(phi) else others.remove(phi) } case _ => if (add) others.add(phi) else others.remove(phi) }) diff --git a/src/main/scala/lisa/proven/mathematics/Mapping.scala b/src/main/scala/lisa/proven/mathematics/Mapping.scala index 70ad8ffe8e83f0994a7f94e12db88c21b6afdf26..337152133287640eeed2e7476d0a4683f6e24208 100644 --- a/src/main/scala/lisa/proven/mathematics/Mapping.scala +++ b/src/main/scala/lisa/proven/mathematics/Mapping.scala @@ -1,7 +1,7 @@ package lisa.proven.mathematics -import lisa.proven.tactics.Destructors.* -import lisa.proven.tactics.ProofTactics.* +import lisa.automation.kernel.Destructors.* +import lisa.automation.kernel.ProofTactics.* import SetTheory.* @@ -9,7 +9,7 @@ import SetTheory.* * This file contains theorem related to the replacement schema, i.e. how to "map" a set through a functional symbol. * Leads to the definition of the cartesian product. */ -object Mapping extends lisa.proven.Main { +object Mapping extends lisa.Main { THEOREM("functionalMapping") of "∀a. (a ∈ ?A) ⇒ ∃!x. ?phi(x, a) ⊢ ∃!X. ∀x. (x ∈ X) ↔ ∃a. 
(a ∈ ?A) ∧ ?phi(x, a)" PROOF { @@ -23,9 +23,9 @@ object Mapping extends lisa.proven.Main { val X = VariableLabel("X") val B = VariableLabel("B") val B1 = VariableLabel("B1") - val phi = SchematicNPredicateLabel("phi", 2) - val sPhi = SchematicNPredicateLabel("P", 2) - val sPsi = SchematicNPredicateLabel("P", 3) + val phi = SchematicPredicateLabel("phi", 2) + val sPhi = SchematicPredicateLabel("P", 2) + val sPsi = SchematicPredicateLabel("P", 3) val H = existsOne(x, phi(x, a)) val H1 = forall(a, in(a, A) ==> H) @@ -248,8 +248,8 @@ object Mapping extends lisa.proven.Main { val X = VariableLabel("X") val B = VariableLabel("B") val B1 = VariableLabel("B1") - val phi = SchematicNPredicateLabel("phi", 2) - val psi = SchematicNPredicateLabel("psi", 3) + val phi = SchematicPredicateLabel("phi", 2) + val psi = SchematicPredicateLabel("psi", 3) val H = existsOne(x, phi(x, a)) val H1 = forall(a, in(a, A) ==> H) val i1 = thm"functionalMapping" @@ -306,7 +306,7 @@ object Mapping extends lisa.proven.Main { val z1 = VariableLabel("z1") val F = SchematicFunctionLabel("F", 1) val f = VariableLabel("f") - val phi = SchematicNPredicateLabel("phi", 1) + val phi = SchematicPredicateLabel("phi", 1) val g = VariableFormulaLabel("g") val g2 = SCSubproof({ @@ -357,8 +357,8 @@ object Mapping extends lisa.proven.Main { val F = SchematicFunctionLabel("F", 1) val A = VariableLabel("A") val B = VariableLabel("B") - val phi = SchematicNPredicateLabel("phi", 1) - val psi = SchematicNPredicateLabel("psi", 3) + val phi = SchematicPredicateLabel("phi", 1) + val psi = SchematicPredicateLabel("psi", 3) val i1 = thm"lemmaLayeredTwoArgumentsMap" val i2 = thm"applyFunctionToUniqueObject" @@ -376,54 +376,55 @@ object Mapping extends lisa.proven.Main { val B = VariableLabel("B") private val z = VariableLabel("z") val cartesianProduct: ConstantFunctionLabel = - DEFINE("cartProd", A, B) asThe z suchThat { - val a = VariableLabel("a") - val b = VariableLabel("b") - val x = VariableLabel("x") - val x0 = 
VariableLabel("x0") - val x1 = VariableLabel("x1") - val A = VariableLabel("A") - val B = VariableLabel("B") - exists(x, (z === union(x)) /\ forall(x0, in(x0, x) <=> exists(b, in(b, B) /\ forall(x1, in(x1, x0) <=> exists(a, in(a, A) /\ (x1 === oPair(a, b))))))) - } PROOF { - def makeFunctional(t: Term): Proof = { - val x = VariableLabel(freshId(t.freeVariables.map(_.id), "x")) - val y = VariableLabel(freshId(t.freeVariables.map(_.id), "y")) - val s0 = RightRefl(() |- t === t, t === t) - val s1 = Rewrite(() |- (x === t) <=> (x === t), 0) - val s2 = RightForall(() |- forall(x, (x === t) <=> (x === t)), 1, (x === t) <=> (x === t), x) - val s3 = RightExists(() |- exists(y, forall(x, (x === y) <=> (x === t))), 2, forall(x, (x === y) <=> (x === t)), y, t) - val s4 = Rewrite(() |- existsOne(x, x === t), 3) - Proof(s0, s1, s2, s3, s4) - } + DEFINE("cartProd", A, B) asThe + z suchThat { + val a = VariableLabel("a") + val b = VariableLabel("b") + val x = VariableLabel("x") + val x0 = VariableLabel("x0") + val x1 = VariableLabel("x1") + val A = VariableLabel("A") + val B = VariableLabel("B") + exists(x, (z === union(x)) /\ forall(x0, in(x0, x) <=> exists(b, in(b, B) /\ forall(x1, in(x1, x0) <=> exists(a, in(a, A) /\ (x1 === oPair(a, b))))))) + } PROOF { + def makeFunctional(t: Term): Proof = { + val x = VariableLabel(freshId(t.freeVariables.map(_.id), "x")) + val y = VariableLabel(freshId(t.freeVariables.map(_.id), "y")) + val s0 = RightRefl(() |- t === t, t === t) + val s1 = Rewrite(() |- (x === t) <=> (x === t), 0) + val s2 = RightForall(() |- forall(x, (x === t) <=> (x === t)), 1, (x === t) <=> (x === t), x) + val s3 = RightExists(() |- exists(y, forall(x, (x === y) <=> (x === t))), 2, forall(x, (x === y) <=> (x === t)), y, t) + val s4 = Rewrite(() |- existsOne(x, x === t), 3) + Proof(s0, s1, s2, s3, s4) + } - val a = VariableLabel("a") - val b = VariableLabel("b") - val x = VariableLabel("x") - val A = VariableLabel("A") - val B = VariableLabel("B") - val psi = 
SchematicNPredicateLabel("psi", 3) + val a = VariableLabel("a") + val b = VariableLabel("b") + val x = VariableLabel("x") + val A = VariableLabel("A") + val B = VariableLabel("B") + val psi = SchematicPredicateLabel("psi", 3) - val i1 = thm"mapTwoArguments" // ∀b. (b ∈ ?B) ⇒ ∀a. (a ∈ ?A) ⇒ ∃!x. ?psi(x, a, b) ⊢ ∃!z. ∃x. (z = U(x)) ∧ ∀x_0. (x_0 ∈ x) ↔ ∃b. (b ∈ ?B) ∧ ∀x1. (x1 ∈ x_0) ↔ ∃a. (a ∈ ?A) ∧ ?psi(x1, a, b) + val i1 = thm"mapTwoArguments" // ∀b. (b ∈ ?B) ⇒ ∀a. (a ∈ ?A) ⇒ ∃!x. ?psi(x, a, b) ⊢ ∃!z. ∃x. (z = U(x)) ∧ ∀x_0. (x_0 ∈ x) ↔ ∃b. (b ∈ ?B) ∧ ∀x1. (x1 ∈ x_0) ↔ ∃a. (a ∈ ?A) ∧ ?psi(x1, a, b) - val s0 = SCSubproof({ - val s0 = SCSubproof(makeFunctional(oPair(a, b))) - val s1 = Weakening((in(b, B), in(a, A)) |- s0.bot.right, 0) - val s2 = Rewrite(in(b, B) |- in(a, A) ==> s0.bot.right.head, 1) - val s3 = RightForall(in(b, B) |- forall(a, in(a, A) ==> s0.bot.right.head), 2, in(a, A) ==> s0.bot.right.head, a) - val s4 = Rewrite(() |- in(b, B) ==> forall(a, in(a, A) ==> s0.bot.right.head), 3) - val s5 = RightForall(() |- forall(b, in(b, B) ==> forall(a, in(a, A) ==> s0.bot.right.head)), 4, in(b, B) ==> forall(a, in(a, A) ==> s0.bot.right.head), b) - Proof(steps(s0, s1, s2, s3, s4, s5)) - }) // ∀b. (b ∈ ?B) ⇒ ∀a. (a ∈ ?A) ⇒ ∃!x. x= (a, b) + val s0 = SCSubproof({ + val s0 = SCSubproof(makeFunctional(oPair(a, b))) + val s1 = Weakening((in(b, B), in(a, A)) |- s0.bot.right, 0) + val s2 = Rewrite(in(b, B) |- in(a, A) ==> s0.bot.right.head, 1) + val s3 = RightForall(in(b, B) |- forall(a, in(a, A) ==> s0.bot.right.head), 2, in(a, A) ==> s0.bot.right.head, a) + val s4 = Rewrite(() |- in(b, B) ==> forall(a, in(a, A) ==> s0.bot.right.head), 3) + val s5 = RightForall(() |- forall(b, in(b, B) ==> forall(a, in(a, A) ==> s0.bot.right.head)), 4, in(b, B) ==> forall(a, in(a, A) ==> s0.bot.right.head), b) + Proof(steps(s0, s1, s2, s3, s4, s5)) + }) // ∀b. (b ∈ ?B) ⇒ ∀a. (a ∈ ?A) ⇒ ∃!x. 
x= (a, b) - val s1 = InstPredSchema( - instantiatePredicateSchemaInSequent(i1, Map(psi -> LambdaTermFormula(Seq(x, a, b), x === oPair(a, b)))), - -1, - Map(psi -> LambdaTermFormula(Seq(x, a, b), x === oPair(a, b))) - ) - val s2 = Cut(() |- s1.bot.right, 0, 1, s1.bot.left.head) - Proof(steps(s0, s1, s2), imports(i1)) - } using (thm"mapTwoArguments") + val s1 = InstPredSchema( + instantiatePredicateSchemaInSequent(i1, Map(psi -> LambdaTermFormula(Seq(x, a, b), x === oPair(a, b)))), + -1, + Map(psi -> LambdaTermFormula(Seq(x, a, b), x === oPair(a, b))) + ) + val s2 = Cut(() |- s1.bot.right, 0, 1, s1.bot.left.head) + Proof(steps(s0, s1, s2), imports(i1)) + } using (thm"mapTwoArguments") show } diff --git a/src/main/scala/lisa/proven/mathematics/SetTheory.scala b/src/main/scala/lisa/proven/mathematics/SetTheory.scala index 4b43d214ae538cbbaf1736c714114522a34c9811..10a66935a3d9b2717c4d94d5140083fe86548a35 100644 --- a/src/main/scala/lisa/proven/mathematics/SetTheory.scala +++ b/src/main/scala/lisa/proven/mathematics/SetTheory.scala @@ -1,21 +1,19 @@ package lisa.proven.mathematics -import lisa.proven.tactics.Destructors.* -import lisa.proven.tactics.ProofTactics.* +import lisa.automation.kernel.Destructors.* +import lisa.automation.kernel.ProofTactics.* /** * An embryo of mathematical development, containing a few example theorems and the definition of the ordered pair. */ -object SetTheory extends lisa.proven.Main { +object SetTheory extends lisa.Main { THEOREM("russelParadox") of "∀x. 
(x ∈ ?y) ↔ ¬(x ∈ x) ⊢" PROOF { val y = VariableLabel("y") val x = VariableLabel("x") - val contra = in(y, y) <=> !in(y, y) - val s0 = Hypothesis(contra |- contra, contra) - val s1 = LeftForall(forall(x, in(x, y) <=> !in(x, x)) |- contra, 0, in(x, y) <=> !in(x, x), x, y) - val s2 = Rewrite(forall(x, in(x, y) <=> !in(x, x)) |- (), 1) - Proof(s0, s1, s2) + val s0 = RewriteTrue(in(y, y) <=> !in(y, y) |- ()) + val s1 = LeftForall(forall(x, in(x, y) <=> !in(x, x)) |- (), 0, in(x, y) <=> !in(x, x), x, y) + Proof(s0, s1) } using () thm"russelParadox".show @@ -334,7 +332,7 @@ object SetTheory extends lisa.proven.Main { val y = VariableLabel("y") val z = VariableLabel("z") val h = VariableFormulaLabel("h") - val sPhi = SchematicNPredicateLabel("P", 2) + val sPhi = SchematicPredicateLabel("P", 2) // forall(z, exists(y, forall(x, in(x,y) <=> (in(x,y) /\ sPhi(x,z))))) val i1 = () |- comprehensionSchema val i2 = thm"russelParadox" // forall(x1, in(x1,y) <=> !in(x1, x1)) |- () diff --git a/src/main/scala/lisa/proven/peano_example/Peano.scala b/src/main/scala/lisa/proven/peano_example/Peano.scala index ca74d81554cd7e88723762491490418b85fa8886..90479702b07b208293d19d6e0357e495a03b556e 100644 --- a/src/main/scala/lisa/proven/peano_example/Peano.scala +++ b/src/main/scala/lisa/proven/peano_example/Peano.scala @@ -1,16 +1,16 @@ package lisa.proven.peano_example +import lisa.automation.kernel.ProofTactics.* import lisa.kernel.fol.FOL.* import lisa.kernel.proof.RunningTheory import lisa.kernel.proof.SCProof import lisa.kernel.proof.SequentCalculus.* -import lisa.proven.tactics.ProofTactics.* import lisa.utils.Helpers.{_, given} import lisa.utils.Library import lisa.utils.Printer object Peano { - export PeanoArithmeticsLibrary.{*, given} + export PeanoArithmeticsLibrary.{_, given} /////////////////////////// OUTPUT CONTROL ////////////////////////// given output: (String => Unit) = println @@ -43,7 +43,7 @@ object Peano { val (premise, conclusion) = (inductionInstance.bot.right.head 
match { case ConnectorFormula(Implies, Seq(premise, conclusion)) => (premise, conclusion) case _ => require(false, s"induction instance should be of the form A => B, got ${Printer.prettyFormula(inductionInstance.bot.right.head)}") - }) + }): @unchecked val baseFormula = baseProof.bot.right.head val stepFormula = inductionStepProof.bot.right.head require( @@ -66,7 +66,7 @@ object Peano { val (y1, z1) = (VariableLabel("y1"), VariableLabel("z1")) - THEOREM("x + 0 = 0 + x") of "∀x. plus(x, zero) === plus(zero, x)" PROOF { + THEOREM("x + 0 = 0 + x") of "?x. plus(x, zero) === plus(zero, x)" PROOF { val refl0: SCProofStep = RightRefl(() |- s(x) === s(x), s(x) === s(x)) val subst1 = RightSubstEq((x === plus(zero, x)) |- s(x) === s(plus(zero, x)), 0, (x, plus(zero, x)) :: Nil, LambdaTermFormula(Seq(y), s(x) === s(y))) val implies2 = RightImplies(() |- (x === plus(zero, x)) ==> (s(x) === s(plus(zero, x))), 1, x === plus(zero, x), s(x) === s(plus(zero, x))) @@ -147,7 +147,7 @@ object Peano { } using (ax"ax4plusSuccessor", ax"ax3neutral", ax"ax7induction") show - THEOREM("switch successor") of "∀y. ∀x. plus(x, s(y)) === plus(s(x), y)" PROOF { + THEOREM("switch successor") of "?y. ?x. plus(x, s(y)) === plus(s(x), y)" PROOF { //////////////////////////////////// Base: x + S0 = Sx + 0 /////////////////////////////////////////////// val base0 = { // x + 0 = x @@ -184,7 +184,7 @@ object Peano { ) } - /////////////// Induction step: ∀y. (x + Sy === Sx + y) ==> (x + SSy === Sx + Sy) //////////////////// + /////////////// Induction step: ?y. 
(x + Sy === Sx + y) ==> (x + SSy === Sx + Sy) //////////////////// val inductionStep1 = { // x + SSy = S(x + Sy) val moveSuccessor0 = SCSubproof(instantiateForall(instantiateForallImport(ax"ax4plusSuccessor", x), s(y)), IndexedSeq(-2)) diff --git a/src/main/scala/lisa/proven/peano_example/PeanoArithmetics.scala b/src/main/scala/lisa/proven/peano_example/PeanoArithmetics.scala index 03797ff243ab25f83cd4929a13e447d427abe8a1..86fb94675dceb78a0eab75707fb2c8c309edc787 100644 --- a/src/main/scala/lisa/proven/peano_example/PeanoArithmetics.scala +++ b/src/main/scala/lisa/proven/peano_example/PeanoArithmetics.scala @@ -12,7 +12,7 @@ object PeanoArithmetics { final val s = ConstantFunctionLabel("S", 1) final val plus = ConstantFunctionLabel("+", 2) final val times = ConstantFunctionLabel("*", 2) - final val sPhi: SchematicPredicateLabel = SchematicNPredicateLabel("?p", 1) + final val sPhi: SchematicPredicateLabel = SchematicPredicateLabel("?p", 1) final val ax1ZeroSuccessor: Formula = forall(x, !(s(x) === zero)) final val ax2Injectivity: Formula = forall(x, forall(y, (s(x) === s(y)) ==> (x === y))) diff --git a/src/test/scala/lisa/automation/ProofTests.scala b/src/test/scala/lisa/automation/ProofTests.scala new file mode 100644 index 0000000000000000000000000000000000000000..4bc569ba16591ac4b3e6ac17bb15572150c37086 --- /dev/null +++ b/src/test/scala/lisa/automation/ProofTests.scala @@ -0,0 +1,392 @@ +package lisa.automation + +import lisa.automation.Proof2.* +import lisa.front.fol.FOL.* +import lisa.kernel.proof.SCProofChecker +import lisa.utils.Printer +import org.scalatest.Ignore +import org.scalatest.funsuite.AnyFunSuite + +import scala.language.adhocExtensions + +// Not working while the front's unifier is not repaired. 
+@Ignore +class ProofTests extends AnyFunSuite { + + val (a, b, c) = (SchematicPredicateLabel[0]("a"), SchematicPredicateLabel[0]("b"), SchematicPredicateLabel[0]("c")) + val (s, t, u) = (SchematicTermLabel[0]("s"), SchematicTermLabel[0]("t"), SchematicTermLabel[0]("u")) + val (x, y) = (VariableLabel("x"), VariableLabel("y")) + + private def checkProof(proof: Proof): Unit = { + val emptyEnvironment: ProofEnvironment = newEmptyEnvironment() + val result = evaluateProof(proof)(emptyEnvironment).map(reconstructSCProof) + assert(result.nonEmpty, s"kernel proof was empty for $proof") + val scProof = result.get._1 + val judgement = SCProofChecker.checkSCProof(scProof) + assert(judgement.isValid, Printer.prettySCProof(judgement)) + assert(scProof.imports.isEmpty, s"proof unexpectedly uses imports: ${Printer.prettySCProof(judgement)}") + assert( + lisa.kernel.proof.SequentCalculus.isSameSequent(scProof.conclusion, proof.initialState.goals.head), + s"proof does not prove ${Printer.prettySequent(proof.initialState.goals.head)}:\n${Printer.prettySCProof(judgement)}" + ) + } + + test("hypothesis") { + checkProof( + Proof( + (a, b /\ c) |- (b /\ c, b) + )( + RuleHypothesis(RuleParameters().withIndices(0)(1)(0)) + ) + ) + + } + + test("left and") { + checkProof( + Proof( + (a /\ b) |- a + )( + RuleIntroductionLeftAnd(), + RuleHypothesis(RuleParameters().withIndices(0)(0)(0)) + ) + ) + } + + test("right and") { + checkProof( + Proof( + (a, b) |- (a /\ b) + )( + RuleIntroductionRightAnd(RuleParameters().withIndices(0)()(0)), + RuleHypothesis(RuleParameters().withIndices(0)(0)(0)), + RuleHypothesis(RuleParameters().withIndices(0)(1)(0)) + ) + ) + } + + test("left or") { + checkProof( + Proof( + (a \/ b) |- (a, b) + )( + RuleIntroductionLeftOr(RuleParameters().withIndices(0)(0)()), + RuleHypothesis(RuleParameters().withIndices(0)(0)(0)), + RuleHypothesis(RuleParameters().withIndices(0)(0)(1)) + ) + ) + } + + test("right or") { + checkProof( + Proof( + a |- (a \/ b) + )( +
RuleIntroductionRightOr(RuleParameters().withIndices(0)()(0)), + RuleHypothesis(RuleParameters().withIndices(0)(0)(0)) + ) + ) + } + + test("left implies") { + checkProof( + Proof( + (a ==> b, a) |- b + )( + RuleIntroductionLeftImplies(RuleParameters().withIndices(0)(0)()), + RuleHypothesis(RuleParameters().withIndices(0)(0)(1)), + RuleHypothesis(RuleParameters().withIndices(0)(1)(0)) + ) + ) + } + + test("right implies") { + checkProof( + Proof( + () |- (a ==> a) + )( + RuleIntroductionRightImplies(RuleParameters().withIndices(0)()(0)), + RuleHypothesis(RuleParameters().withIndices(0)(0)(0)) + ) + ) + } + + test("left iff") { + checkProof( + Proof( + (a <=> b) |- (b ==> a) + )( + RuleIntroductionLeftIff(RuleParameters().withIndices(0)(0)()), + RuleHypothesis(RuleParameters().withIndices(0)(1)(0)) + ) + ) + } + + test("right iff") { + checkProof( + Proof( + (a ==> b, b ==> a) |- (a <=> b) + )( + RuleIntroductionRightIff(RuleParameters().withIndices(0)()(0)), + RuleHypothesis(RuleParameters().withIndices(0)(0)(0)), + RuleHypothesis(RuleParameters().withIndices(0)(1)(0)) + ) + ) + } + + test("left not") { + checkProof( + Proof( + (a, !a) |- b + )( + RuleIntroductionLeftNot(RuleParameters().withIndices(0)(1)()), + RuleHypothesis(RuleParameters().withIndices(0)(0)(1)) // FIXME shouldn't it be 0? 
+ ) + ) + } + + test("right not") { + checkProof( + Proof( + () |- (!a, a) + )( + RuleIntroductionRightNot(RuleParameters().withIndices(0)()(0)), + RuleHypothesis(RuleParameters().withIndices(0)(0)(0)) + ) + ) + } + + test("left =") { + checkProof( + Proof( + () |- (t === t) + )( + RuleEliminationLeftRefl(RuleParameters().withFunction(Notations.s, t)), + RuleHypothesis() + ) + ) + } + + test("right =") { + checkProof( + Proof( + () |- (t === t) + )( + RuleIntroductionRightRefl(RuleParameters().withFunction(Notations.s, t)) + ) + ) + } + + test("introduction higher order") { + checkProof( + Proof( + forall(x, u === x) |- (u === s) + )( + RuleIntroductionLeftForall( + RuleParameters() + .withPredicate(Notations.p, x => u === x) + .withFunction(Notations.t, s) + ), + RuleHypothesis() + ) + ) + } + + test("right forall right or") { + checkProof( + Proof( + a |- forall(x, (u === x) \/ a) + )( + RuleIntroductionRightForall( + RuleParameters() + .withPredicate(Notations.p, x => (u === x) \/ a) + ), + RuleIntroductionRightOr(), + RuleHypothesis() + ) + ) + } + + test("left exists left and") { + checkProof( + Proof( + exists(x, (s === x) /\ a) |- a + )( + RuleIntroductionLeftExists( + RuleParameters() + .withPredicate(Notations.p, x => (s === x) /\ a) + ), + RuleIntroductionLeftAnd(), + RuleHypothesis() + ) + ) + } + + test("right exists") { + checkProof( + Proof( + (s === t) |- exists(x, s === x) + )( + RuleIntroductionRightExists( + RuleParameters() + .withPredicate(Notations.p, s === _) + .withFunction(Notations.t, t) + ), + RuleHypothesis() + ) + ) + } + + test("left subst =") { + checkProof( + Proof( + (s === t, u === t) |- (u === s) + )( + RuleIntroductionLeftSubstEq( + RuleParameters() + .withPredicate(Notations.p, u === _) + ), + RuleHypothesis() + ) + ) + } + + test("right subst =") { + checkProof( + Proof( + (s === t, u === s) |- (u === t) + )( + RuleIntroductionRightSubstEq( + RuleParameters() + .withPredicate(Notations.p, u === _) + ), + RuleHypothesis() + ) + ) 
+ } + + test("left subst iff") { + checkProof( + Proof( + (a <=> b, c <=> b) |- (c <=> a) + )( + RuleIntroductionLeftSubstIff( + RuleParameters() + .withConnector(Notations.f, c <=> _) + ), + RuleHypothesis() + ) + ) + } + + test("right subst iff") { + checkProof( + Proof( + (a <=> b, c <=> a) |- (c <=> b) + )( + RuleIntroductionRightSubstIff( + RuleParameters() + .withConnector(Notations.f, c <=> _) + ), + RuleHypothesis() + ) + ) + } + + test("elimination left subst iff") { + checkProof( + Proof( + (s === t) |- (t === s) + )( + RuleEliminationLeftSubstIff( + RuleParameters() + .withConnector(Notations.f, identity) + .withPredicate(Notations.a, t === s) + .withPredicate(Notations.b, s === t) + ), + RuleHypothesis(), + TacticalRewrite((s === t) |- (s === t)), + RuleHypothesis() + ) + ) + } + + test("elimination right subst iff") { + checkProof( + Proof( + (s === t) |- (t === s) + )( + RuleEliminationRightSubstIff( + RuleParameters() + .withConnector(Notations.f, identity) + .withPredicate(Notations.a, s === t) + .withPredicate(Notations.b, t === s) + ), + RuleHypothesis(), + TacticalRewrite((s === t) |- (s === t)), + RuleHypothesis() + ) + ) + } + + test("elimination left subst =") { + checkProof( + Proof( + (s === t, t === u) |- (s === u) + )( + RuleEliminationLeftSubstEq( + RuleParameters() + .withPredicate(Notations.p, _ === u) + .withFunction(Notations.s, s) + .withFunction(Notations.t, t) + ), + RuleHypothesis(), + RuleHypothesis() + ) + ) + } + + test("elimination right subst =") { + checkProof( + Proof( + (s === t, t === u) |- (s === u) + )( + RuleEliminationRightSubstEq( + RuleParameters() + .withPredicate(Notations.p, _ === u) + .withFunction(Notations.s, t) + .withFunction(Notations.t, s) + ), + RuleHypothesis(), + TacticalRewrite((s === t, t === u) |- (s === t)), + RuleHypothesis() + ) + ) + } + + test("environment") { + val ctx = newEmptyEnvironment() + + val thm1 = ctx.mkTheorem( + Proof( + () |- ((a /\ b) ==> (b /\ a)) + )( + TacticSolverNative + ) + 
) + + val thm2 = ctx.mkTheorem( + Proof( + () |- ((b /\ a) ==> (a /\ b)) + )( + TacticSolverNative + ) + ) + + val thm3 = RuleIntroductionRightIff(thm1, thm2).get + + val reconstructed = reconstructSCProofForTheorem(thm3) + + assert(SCProofChecker.checkSCProof(reconstructed).isValid, Printer.prettySCProof(reconstructed)) + } +} diff --git a/src/test/scala/lisa/examples/ExampleTests.scala b/src/test/scala/lisa/examples/ExampleTests.scala new file mode 100644 index 0000000000000000000000000000000000000000..95057ebac07b6ec5aa94d3b09fdfd7fb995aecc4 --- /dev/null +++ b/src/test/scala/lisa/examples/ExampleTests.scala @@ -0,0 +1,29 @@ +package lisa.examples + +import org.scalatest.funsuite.AnyFunSuite + +import scala.language.adhocExtensions + +class ExampleTests extends AnyFunSuite { + /* + test("front interactive proof (1)") { + frontInteractiveProof1() + } + + test("front interactive proof (2)") { + frontInteractiveProof2() + } + + test("front matching") { + frontMatching() + } + + test("front parsing & printing") { + frontParsingPrinting() + } + + test("front solver") { + frontSolver() + } + */ +} diff --git a/src/test/scala/lisa/proven/InitialProofsTests.scala b/src/test/scala/lisa/proven/InitialProofsTests.scala index 8872b0414d391064d3290379ce7acee8ffa06699..09cfd94329a87b62d8f1fc29ac073b37372cbae1 100644 --- a/src/test/scala/lisa/proven/InitialProofsTests.scala +++ b/src/test/scala/lisa/proven/InitialProofsTests.scala @@ -4,7 +4,7 @@ import lisa.test.ProofCheckerSuite import lisa.utils.Printer class InitialProofsTests extends ProofCheckerSuite { - import lisa.proven.SetTheoryLibrary.* + import lisa.settheory.SetTheoryLibrary.* test("File SetTheory initialize well") { lisa.proven.mathematics.SetTheory diff --git a/src/test/scala/lisa/proven/SimpleProverTests.scala b/src/test/scala/lisa/proven/SimpleProverTests.scala index e6ba8d3f3716600d5bcfe1bd49abf6b6b2a25d8b..db01086dff856649378297a17b493dd4aa3e9bd8 100644 --- 
a/src/test/scala/lisa/proven/SimpleProverTests.scala +++ b/src/test/scala/lisa/proven/SimpleProverTests.scala @@ -1,10 +1,10 @@ package lisa.proven +import lisa.automation.kernel.SimplePropositionalSolver as SPS import lisa.kernel.fol.FOL.* import lisa.kernel.proof.RunningTheory import lisa.kernel.proof.RunningTheory.PredicateLogic import lisa.kernel.proof.SCProofChecker -import lisa.proven.tactics.SimplePropositionalSolver as SPS import lisa.tptp.KernelParser.* import lisa.tptp.ProblemGatherer.getPRPproblems import lisa.utils.Helpers.*
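
The `RuleHypothesis` steps that close almost every goal in the new `ProofTests` rely on one side condition: a sequent is discharged when some formula occurs on both its left and right side. As a minimal, self-contained sketch (this is *not* LISA's actual API — `Sequent` and `hypothesisApplies` here are hypothetical stand-ins with formulas modelled as strings), the condition looks like this:

```scala
// Hypothetical simplified model of the Hypothesis rule's side condition.
// In LISA proper, formulas are structured terms and indices select the
// matching occurrences; here we only check that a shared formula exists.
final case class Sequent(left: Set[String], right: Set[String])

// Hypothesis applies iff the left and right sides share a formula.
def hypothesisApplies(s: Sequent): Boolean =
  s.left.intersect(s.right).nonEmpty

@main def demo(): Unit =
  // mirrors test("hypothesis"): (a, b /\ c) |- (b /\ c, b)
  assert(hypothesisApplies(Sequent(Set("a", "b /\\ c"), Set("b /\\ c", "b"))))
  // no shared formula, so Hypothesis cannot close this goal
  assert(!hypothesisApplies(Sequent(Set("a"), Set("b"))))
  println("hypothesis side condition ok")
```

In the real test suite the `withIndices(0)(1)(0)` parameters additionally pin down *which* occurrence on each side is matched, which this sketch elides.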