From 364d0645253c5db37412a194a850ec9ef16a4a53 Mon Sep 17 00:00:00 2001
From: Tim Daly
Date: Wed, 12 Dec 2018 13:38:01 -0500
Subject: [PATCH] books/bookvol13
Goal: Literate Programming
add section

 books/bookvol13.pamphlet       |  233 +++
 changelog                      |    2 +
 patch                          | 2206 +-
 src/axiom-website/patches.html |    2 +
 4 files changed, 240 insertions(+), 2203 deletions(-)
diff --git a/books/bookvol13.pamphlet b/books/bookvol13.pamphlet
index 4eaca41..e184418 100644
--- a/books/bookvol13.pamphlet
+++ b/books/bookvol13.pamphlet
@@ -41,6 +41,16 @@ proper algorithm.}\\
\end{quote}
\begin{quote}
+When correctness concerns come as an afterthought and correctness
+proofs have to be given once the program is already completed, the
+programmer can indeed expect severe troubles. If, however, he adheres
+to the discipline to produce the correctness proofs as he programs
+along [my italics], he will produce program and proof with less effort
+than programming alone would have taken.
+ Edsger Dijkstra \cite{Dijk70}
+\end{quote}
+
+\begin{quote}
{\bf If what you are working on is not important, why are you working on it?}\\
 Richard Hamming \cite{Hamm95}
\end{quote}
@@ -238,6 +248,12 @@ hasn't yet been solved, but the time for solving it has come now.}\\
 Savas Dimopoulos; Stanford University
\end{quote}
+\begin{quote}
+{\bf Every morning one should wake up and reflect on the conceptual
+and foundational significance of one's work.}\\
+ Harvey Friedman
+\end{quote}
+
This is an effort to prove Axiom correct. That is, we wish to prove
that the algorithms provide correct, trustworthy answers.
@@ -483,6 +499,146 @@ Bowen and Hinchey \cite{Bowe95} follow up with 7 more myths.
\item Formal methods people always use formal methods
\end{enumerate}
+Murray and van Oorschot \cite{Murr18} provide comments that, while
+they are intended for security-related proofs, also apply to proofs
+in general. We paraphrase the comments that we feel are relevant to
+all proofs.
+
+Proofs have been used as a means to aid the development of systems for
+at least the past five decades. There has been much debate about
+proof's role in developing software and systems, including its
+practicality for mainstream software and relevance for reasoning about
+the real world.
+
+There has been a rash of large-scale software verification projects
+that have resulted in the construction of software systems whose
+assurance is backed by (machine-checked) proofs about the software's
+behaviour.
+
+As an assurance mechanism, proof remains poorly understood,
+especially outside the community of formal verification
+practitioners and researchers.
+
+Proof surely delivers many benefits, and examples abound of defects
+and vulnerabilities found during the process of proving (a property
+about) a system that led to improvements in the system that might have
+been unlikely otherwise. Proof also brings with it its own
+{\sl side effects} (code changes and deployment constraints), not all
+of which may be positive for the system and some of which might even
+be negative (aside from effects on performance).
+
+There is both confusion and a wide variety of interpretations, by
+different stakeholders and communities, about the value and role of
+formal proofs in the construction of systems.
+
+What is the value of a formal proof if it cannot establish properties
+of the real system?
+
+The most common interpretation of the meaning of a proof is that it
+provides {\sl guarantees} about a system. This perspective is perhaps
+strongest {\sl outside} of the formal methods community. {\sl Within}
+the formal methods community, i.e., those practising formal
+verification, things are more nuanced. This apparent contradiction,
+frequent in the literature and not to be discounted as a source of
+confusion, is resolved by observing that a proof provides guarantees
+subject to the accuracy of the model and proof assumptions, i.e., it
+provides guarantees about the real world when, and only when, the
+formal model of the system matches the system's real-world behaviour
+with respect to the property being proven.
+
+All proofs have assumptions. Yet, as noted by DeMillo, Lipton, and
+Perlis in their infamous critique of formal methods \cite{Demi79},
+what sets formal verification of software apart from proofs in pure
+mathematics is their sheer volume of assumptions.
+
+One conclusion about the value of formal proofs is that they allow an
+evaluator to concentrate their efforts on validating the accuracy of
+the proof's assumptions, enabling them to ignore large parts of the
+implementation. The value of a formal guarantee is that it
+concentrates remaining doubt on the assumptions.
+
+An alternative perspective is that the value in proofs is not to
+provide (qualified) guarantees, but instead to force careful and
+rigorous understanding of a system and its operation. There is no
+question that proof processes generate more robust code. Yet the
+general consensus seems to be that this robustness is much more an
+emergent consequence of rigorously understanding the problem than a
+result of the proof discharge. If discharging the entire proof is
+merely a side effect, we note that under this point of view even
+{\sl partial} proofs have value. The same is less clear for the
+viewpoint of proofs as qualified guarantees, since an unfinished proof
+provides no guarantees about the formal object under study.
+
+Interactive proofs provide a critical mechanism for performing
+structured, interactive exploration of a system (via a formal
+model). Carrying out the proof forces the human to carefully examine
+each part of the system model, and allows human intuition to bring to
+light issues that might be missed by the proof itself.
+
+There are thus some established ideas about the benefits of proofs:
+providing qualified guarantees; allowing auditors to concentrate
+effort on validating proof assumptions and the formal model; and
+providing a means for structured exploration of a system to better
+understand and improve it.
+
+Yet against these benefits stand proof's disadvantages, which, as
+with proof itself, remain poorly understood especially outside the
+formal methods community. One is proof {\sl brittleness}: e.g.,
+changing one line of code can potentially invalidate large amounts
+of formal reasoning. It is difficult to judge the guarantees
+provided by a proof when even {\sl one} proof assumption deviates
+from reality. Another is a dearth of techniques for reliably
+predicting the cost of formal methods. Still another is the
+difficulty that both non-experts and experts have in discerning
+precisely the nature of the formal property that a proof
+establishes, and the difficulty of validating implicit assumptions
+in large-scale formal models on which such proofs rest. This leaves
+plenty of room for gaps between what a non-expert might {\sl think}
+has been proven and the precise formal property {\sl actually}
+proven.
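+
+To make brittleness concrete, here is a minimal Coq sketch of our
+own (it is not from Murray and van Oorschot): with the definition
+below the lemma is discharged by pure computation; change the body
+to the mathematically equal {\tt x + 0} and the same one-line proof
+fails, because {\tt plus} recurses on its first argument.
+\begin{verbatim}
+(* A one-line change that invalidates a proof.  With 0 + x the
+   goal f x = x reduces to x = x.  With x + 0 it does not, and
+   the proof must be redone by induction or via Nat.add_0_r. *)
+Definition f (x : nat) : nat := 0 + x.
+
+Lemma f_id : forall x : nat, f x = x.
+Proof. intros x. reflexivity. Qed.
+\end{verbatim}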
+
+There are effects {\sl on the system}, both beneficial and otherwise,
+for which proof is a proximate cause, i.e., concrete effects on the
+system brought about directly as a result of performing the proof. We
+refer to these as the {\sl side effects} of a proof.
+
+A proof might {\sl induce} two kinds of side effects on the system.
+One is {\sl changes to the code} either to fix a bug found during the
+proof or to modify some aspect of the design or implementation to open
+a path enabling a proof. A second is {\sl deployment constraints}
+imposed on the system to enforce {\sl environmental assumptions}
+necessary for proofs.
+
+\section{General problems with formal verification}
+
+In Moy and Wallenburg \cite{Moyx10} we find:
+\begin{enumerate}
+\item Formal verification is not applied to all parts of a
+system. Around a formally verified core, there are usually parts of
+the system that cannot be verified formally, because they deal with
+interfacing with other systems or a user, or because it would be too
+costly to verify them (effort/time/money). The cost/benefit ratio of
+formal verification is particularly important to take into account for
+those less critical parts.
+\item Formal verification is only as strong as the specification.
+When formally verifying the absence of run-time errors in
+programs, the specification is given by the semantics of the
+programming language. However, in general there is no way to ``guess''
+the intended meaning of some part of a system; the developer has
+to state it in a specification. Writing specifications can be as
+error-prone as writing programs (see the sketch after this list). In
+practice, these specifications are not complete functional
+specifications, as this would be much too costly. Only the properties
+that a developer specified can be formally verified.
+\item Formal verification techniques do not target all desirable
+properties of programs. Most formal verification frameworks verify the
+absence of run-time errors, and properties expressed by developers are
+invariants or function contracts. In particular, formal verification
+techniques are not usually good at detecting dead code and ``strange''
+code (inefficient or unidiomatic code), or covert channels.
+\item The formal verification method itself may be flawed, for example
+if the implementation of the underlying theorem prover or static
+analyzer contains problems.
+\end{enumerate}
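+
+As a small illustration of the second point, here is a hedged Coq
+sketch of our own devising: a ``sorting'' specification that demands
+only sortedness of the output is satisfied by a function that
+discards its input entirely. The permutation half of the
+specification is what rules this out, and it is exactly the half
+that is easy to forget.
+\begin{verbatim}
+Require Import List Sorted.
+Import ListNotations.
+
+(* A deliberately wrong "sort" that meets a too-weak spec. *)
+Definition bogus_sort (l : list nat) : list nat := [].
+
+(* The output is always sorted, vacuously. *)
+Lemma bogus_sorted : forall l, Sorted le (bogus_sort l).
+Proof. intros l. constructor. Qed.
+\end{verbatim}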
+
\chapter{Progress Will Occur}
At the very lowest level there have been some truly impressive steps
@@ -1445,6 +1601,42 @@ call $gcd y (x{\rm\ mod\ }y)$, the recursion variant $x{\rm\ mod\ }y$
is less than $y$, the variant for $gcd x y$.
\end{itemize}
+\chapter{GCD in Coq}
+
+\begin{verbatim}
+Fixpoint Pgcdn (n : nat) (a b : positive) : positive :=
+  match n with
+    | O => 1
+    | S n =>
+      match a, b with
+        | xH, _ => 1
+        | _, xH => 1
+        | xO a, xO b => xO (Pgcdn n a b)
+        | a, xO b => Pgcdn n a b
+        | xO a, b => Pgcdn n a b
+        | xI a', xI b' =>
+          match Pcompare a' b' Eq with
+            | Eq => a
+            | Lt => Pgcdn n (b' - a') a
+            | Gt => Pgcdn n (a' - b') b
+          end
+      end
+  end.
+
+Definition Zgcd (a b : Z) : Z :=
+  match a, b with
+    | Z0, _ => Zabs b
+    | _, Z0 => Zabs a
+    | Zpos a, Zpos b => Zpos (Pgcd a b)
+    | Zpos a, Zneg b => Zpos (Pgcd a b)
+    | Zneg a, Zpos b => Zpos (Pgcd a b)
+    | Zneg a, Zneg b => Zpos (Pgcd a b)
+  end.
+
+Lemma Zgcd_is_gcd : forall a b, Zis_gcd a b (Zgcd a b).
+
+\end{verbatim}
+
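+As a quick sanity check, the definition can be evaluated directly.
+This assumes the surrounding (older) Coq standard library context,
+in which Pgcd instantiates Pgcdn with enough fuel and Zgcd is as
+above; in recent Coq the same function is available as Z.gcd.
+\begin{verbatim}
+Require Import ZArith.  (* assumed to provide the definitions above *)
+Open Scope Z_scope.
+
+Eval compute in Zgcd 12 18.    (* = 6 : Z *)
+Eval compute in Zgcd (-12) 18. (* = 6 : Z; the result is >= 0 *)
+\end{verbatim}
+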
\chapter{GCD in Nuprl by Anne Trostle}
Quoted from \cite{Tros13}:
@@ -4843,6 +5035,47 @@ at some point before the loop using such a label.
Fr\"uhwirht \cite{Frue91} details optimistic type systems for logic
programs.
+\subsection{Gurevich \cite{Gure12}}
+
+What is an Algorithm?
+
+Turing's analysis of computation was a stroke of genius.
+The analysis is extremely famous and yet it is often misunderstood.
+
+Some people think that every computable function, total or partial,
+can be computed by a Turing machine. This is not so, and here are some
+counterexamples. Consider Euclid's algorithm for computing the
+greatest common divisor $d=gcd(a,b)$ of two natural numbers $a,b$.
+\begin{verbatim}
+  let M = max(a,b), m = min(a,b)
+  while M > m do
+    M, m := max(M-m, m), min(M-m, m)
+  d := M
+\end{verbatim}
+
+The gcd function on natural numbers is of course Turing computable,
+but the algorithm was also applied  in theory and in practice  to
+the lengths of segments of a straight line, which gives rise to a
+computable partial function (the algorithm does not terminate if the
+two given lengths are incommensurate) that is not Turing computable
+because you cannot place an arbitrary length on the Turing tape. More
+generally, the functions computed by ruler-and-compass algorithms are
+not Turing computable. And let us emphasize that ruler and compass
+were practical tools in ancient Greece and that a number of ruler and
+compass algorithms were practical algorithms.
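+
+As a sketch of the first claim, the subtraction-based algorithm on
+natural numbers can be transcribed into Coq using the same
+fuel-passing device as Pgcdn in the earlier chapter. The names
+euclid_sub and gcd_sub are ours, for illustration only; the fuel
+bound a + b suffices because each iteration with m > 0 decreases
+M + m by at least one.
+\begin{verbatim}
+Require Import Arith.
+
+Fixpoint euclid_sub (fuel M m : nat) : nat :=
+  match fuel with
+  | 0 => M
+  | S fuel' =>
+      if m <? M   (* while M > m do ... *)
+      then euclid_sub fuel' (max (M - m) m) (min (M - m) m)
+      else M      (* d := M *)
+  end.
+
+Definition gcd_sub (a b : nat) : nat :=
+  euclid_sub (a + b) (max a b) (min a b).
+
+(* Eval compute in gcd_sub 12 18.   = 6 *)
+\end{verbatim}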
+
+Moschovakis \cite{Mosc84} discusses Euclid's algorithm for the greatest common divisor of two natural numbers. Then he says:
+
+Following the drive of the discussion, we might be expected at this
+point to simply identify the Euclidean algorithm with the functional
+gcd. We will not go quite that far, because the time-honored intuitive
+concept of algorithm carries many linguistic and intensional
+connotations (some of them tied up with implementations) with which we
+have not concerned ourselves. Instead we will make the weaker (and
+almost trivial) claim that {\sl the functional gcd embodies all the
+essential mathematical properties of the Euclidean algorithm}.
+
+
\subsection{Harrison \cite[p13]{Harr98}}
There are several examples of computer algebra results which may be
diff --git a/changelog b/changelog
index 2942860..b1a0746 100644
--- a/changelog
+++ b/changelog
@@ -1,3 +1,5 @@
+20181211 tpd src/axiom-website/patches.html 20181211.02.tpd.patch
+20181211 tpd books/bookvol13 add section
20181211 tpd src/axiom-website/patches.html 20181211.01.tpd.patch
20181211 tpd books/bookvolbib add references
20181211 tpd readme add quote
diff --git a/patch b/patch
index 1e60341..d8ee8c8 100644
--- a/patch
+++ b/patch
@@ -1,2205 +1,5 @@
-books/bookvolbib add references
+books/bookvol13
-Goal: Proving Axiom Sane
+Goal: Literate Programming
\index{Murray, Toby}
\index{van Oorschot, P.C.}
\begin{chunk}{axiom.bib}
@misc{Murr18,
 author = "Murray, Toby and van Oorcchot, P.C.",
 title = {{BP: Formal Proofs, the Fine Print and Side Effects}},
 link = "\url{https://people.eng.unimelb.edu.au/tobym/papers/secdev2018.pdf}",
 comment = "Version: 26 June 2018 to appear in IEEE SecDev 2018",
 abstract =
 "Given recent highprofile successes in formal verification of
 securityrelated properties (e.g. for seL4), and the rising
 popularity of applying formal methods to cryptographic libraries
 and security protocols like TLS, we revisit the meaning of
 securityrelated proofs about software. We reexamine old issues,
 and identify new questions that have escaped scrutiny in the
 formal methods literature. We consider what value proofs about
 software systems deliver to end-users (e.g. in terms of net
 assurance benefits), and at what cost in terms of side effects
 (such as changes made to software to facilitate the proofs, and
 assumption-related deployment restrictions imposed on software if
 these proofs are to remain valid in operation). We consider in
 detail, for the first time to our knowledge, possible
 relationships between proofs and side effects. To make our
 discussion concrete, we draw on tangible examples, experience, and
 the literature.",
 paper = "Murr18.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Filliatre, Jean-Christophe}
\begin{chunk}{axiom.bib}
@misc{Fill13a,
 author = "Filliatre, JeanChristophe",
 title = {{Deductive Program Verification}},
 year = "2013",
 comment = "slides"
}

\end{chunk}

\index{Fredrikson, Matt}
\begin{chunk}{axiom.bib}
@misc{Fred16,
 author = "Fredrikson, Matt",
 title = {{Automated Program Verification and Testing}},
 comment = "slides",
 year = "2016"
}

\end{chunk}

\index{Clochard, Martin}
\index{Gondelman, Leon}
\index{Pereira, Mario}
\begin{chunk}{axiom.bib}
@article{Cloc18,
 author = "Clochard, Martin and Gondelman, Leon and Pereira, Mario",
 title = {{The Matrix Reproved}},
 journal = "Journal of Automated Reasoning",
 volume = "60",
 number = "3",
 pages = "365383",
 year = "2018",
 link = "\url{https://hal.inria.fr/hal01617437/document}",
 abstract =
 "In this paper we describe a complete solution for the
 first challenge of the VerifyThis 2016 competition held at the 18th
 ETAPS Forum. We present the proof of two variants for the
 multiplication of matrices: a naive version using three nested loops
 and Strassen's algorithm. The proofs are conducted using the Why3
 platform for deductive program verification and automated theorem
 provers to discharge proof obligations. In order to specify and
 prove the two multiplication algorithms, we develop a new Why3
 theory of matrices. In order to prove the matrix identities on which
 Strassen's algorithm is based, we apply the proof by reflection
 methodology, which we implement using ghost state. To our knowledge,
 this is the first time such a methodology is used under an
 auto-active setting.",
 paper = "Cloc18.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Balzer, Stephanie}
\index{Pfenning, Frank}
\index{Toninho, Bernardo}
\begin{chunk}{axiom.bib}
@article{Balz18,
 author = "Balzer, Stephanie and Pfenning, Frank and Toninho, Bernardo",
 title = {{A Universal Session Type for Untyped Asynchronous Communication}},
 journal = "CONCUR",
 volume = "30",
 number = "1",
 year = "2018",
 link = "\url{https://www.cs.cmu.edu/~fp/papers/univtype18.pdf}",
 abstract =
 "In the simplytyped $\lambda$calculus we can recover the full
 range of expressiveness of the untyped $\lambda$calculus solely
 by adding a single recursive type
 $\mathscr{U} = \mathscr{U}\rightarrow \mathscr{U}$. In contract,
 in the sessiontyped $\Pi$calculus, recursion alone is
 insufficient to recover the untyped $\Pi$calculus, primarily due
 to linearity: each channel just has two unique endpoints. In this
 paper, we show that shared channels with a corresponding sharing
 semantics (based on the language SILL$_s$ developed in prior
 work) are enough to embed the untyped asynchronous $\Pi$-calculus
 via a universal shared session type $\mathscr{U}_s$. We show that
 our encoding of the asynchronous $\Pi$calculus satisfies
 operational correspondence and preserves observable actions (i.e.,
 processes are weakly bisimilar to their encoding). Moreover, we
 clarify the expressiveness of SILL$_s$ by developing an
 operationally correct encoding of SILL$_s$ in the asynchronous
 $\Pi$calculus.",
 paper = "Balz18.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Balzer, Stephanie}
\index{Pfenning, Frank}
\begin{chunk}{axiom.bib}
@inproceedings{Balz17,
 author = "Balzer, Stephanie and Pfenning, Frank",
 title = {{Manifest Sharing with Session Types}},
 booktitle = "Proc. ACM Program. Lang.",
 publisher = "ACM",
 year = "2017"
 link = "\url{https://www.cs.cmu.edu/~fp/papers/icfp17.pdf}",
 abstract =
 "Sessiontyped languages building on the CurryHoward isomorphism
 between linear logic and sessiontyped communication guarantee session
 fidelity and deadlock freedom. Unfortunately, these strong guarantees
 exclude many naturally occurring programming patterns pertaining to
 shared resources. In this paper, we introduce sharing into a
 session-typed language where types are stratified into linear and
 shared layers with modal operators connecting the layers. The
 resulting language retains session fidelity but not the absence of
 deadlocks, which can arise from contention for shared processes. We
 illustrate our language on various examples, such as the dining
 philosophers problem, and provide a translation of the untyped
 asynchronous $\Pi$-calculus into our language.",
 paper = "Balz17.pdf",
 keywords = "printed"
}

\end{chunk}

\begin{chunk}{axiom.bib}
@article{SCSCP10,
 author = "Unknown",
 title = {{Symbolic Computation Software Composability Project and
 its implementations}},
 journal = "ACM Comm. in Computer Algebra",
 volume = "44",
 number = "4",
 year = "2010",
 keywords = "printed"
}

\end{chunk}

\index{Bohrer, Brandon}
\index{Crary, Karl}
\begin{chunk}{axiom.bib}
@article{Bohr17,
 author = "Bohrer, Brandon and Crary, Karl",
 title = {{TWAM: A Certifying Abstract Machine for Logic Programs}},
 journal = "ACM Trans. on Computational Logic",
 volume = "1",
 number = "1",
 year = "2017",
 link = "\url{https://arxiv.org/pdf/1801.00471.pdf}",
 abstract =
 "Typepreserving (or typed) compilation uses typing derivations to
 certify correctness properties of compilation. We have designed and
 implemented a typepreserving compiler for a simplytyped dialect of
 Prolog we call TProlog. The crux of our approach is a new certifying
 abstract machine which we call the Typed Warren Abstract Machine
 (TWAM). The TWAM has a dependent type system strong enough to specify
 the semantics of a logic program in the logical framework LF. We
 present a soundness metatheorem which constitutes a partial
 correctness guarantee: well-typed programs implement the logic
 program specified by their type. This metatheorem justifies our design
 and implementation of a certifying compiler from TProlog to TWAM.",
 paper = "Bohr17.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Dershowitz, Nachum}
\index{Gurevich, Yuri}
\begin{chunk}{axiom.bib}
@article{Ders08,
 author = "Dershowitz, Nachum and Gurevich, Yuri",
 title = {{A Natural Axiomatization of Computability and Proof of
 Church's Thesis}},
 journal = "Bulletin of Symbolic Logic",
 volume = "14",
 number = "3",
 year = "2008",
 abstract =
 "Church’s Thesis asserts that the only numeric functions that can be
 calculated by effective means are the recursive ones, which are the
 same, extensionally, as the Turing computable numeric functions. The
 Abstract State Machine Theorem states that every classical algorithm
 is behaviorally equivalent to an abstract state machine. This theorem
 presupposes three natural postulates about algorithmic
 computation. Here, we show that augmenting those postulates with an
 additional requirement regarding basic operations gives a natural
 axiomatization of computability and a proof of Church’s Thesis, as
 Gödel and others suggested may be possible. In a similar way, but with
 a different set of basic operations, one can prove Turing’s Thesis,
 characterizing the effective string functions, and—in particular—the
 effectively-computable functions on string representations of numbers.",
 paper = "Ders08.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Boas, Peter van Emde}
\begin{chunk}{axiom.bib}
@article{Boas12,
 author = "Boas, Peter van Emde",
 title = {{Turing Machines for Dummies}},
 journal = "LNCS",
 volume = "7147",
 pages = "1430",
 year = "2012",
 abstract =
 "Various methods exists in the litearture for denoting the con
 figuration of a Turing Machine. A key difference is whether the head
 position is indicated by some integer (mathematical representation) or
 is specified by writing the machine state next to the scanned tape
 symbol (intrinsic representation).

 From a mathematical perspective this will make no difference.
 However, since Turing Machines are primarily used for proving undecidability
 and/or hardness results these representations do matter. Based on
 a number of applications we show that the intrinsic representation
 should be preferred."
 paper = "Boas12.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Gurevich, Yuri}
\begin{chunk}{axiom.bib}
@article{Gure12,
 author = "Gurevich, Yuri",
 title = {{What Is An Algorithm?}},
 journal = "LNCS",
 volume = "7147",
 pages = "3142",
 year = "2012",
 abstract =
 "We attempt to put the title problem and the ChurchTuring thesis into
 a proper perspective and to clarify some common misconcep tions
 related to Turing’s analysis of computation. We examine two approaches
 to the title problem, one wellknown among philosophers and another
 among logicians.",
 paper = "Gure12.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Moschovakis, Y.N.}
\begin{chunk}{axiom.bib}
@article{Mosc84,
 author = "Moschovakis, Y.N.",
 title = {{Abstract Recursion as a Foundation of the Theory of Algorithms}},
 journal = "LNCS",
 volume = "1104",
 pages = "289364",
 publisher = "Springer",
 year = "1984",
 abstract =
 "The main object of this paper is to describe an abstract,
 (axiomatic) theory of recursion and its connection with some of
 the basic, foundational questions of computer science.",
 paper = "Mosc84.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Pittman, Dan}
\begin{chunk}{axiom.bib}
@misc{Pitt18,
 author = "Pittman, Dan",
 title = {{Proof Theory Impressionism: Blurring the Curry-Howard Line}},
 link = "\url{https://www.youtube.com/watch?v=jrVPBAd5Gc}",
 year = "2018",
 comment = "Strange Loop 2018 Conference"
}

\end{chunk}

\index{Polikarpova, Nadia}
\begin{chunk}{axiom.bib}
@misc{Poli18,
 author = "Polikarpova, Nadia",
 title = {{Type-Driven Program Synthesis}},
 link = "\url{https://www.youtube.com/watch?v=HnOix9TFy1A}",
 year = "2018",
 abstract =
 "A promising approach to improving software quality is to enhance
 programming languages with declarative constructs, such as logical
 specifications or examples of desired behavior, and to use program
 synthesis technology to translate these constructs into efficiently
 executable code. In order to produce code that provably satisfies a
 rich specification, the synthesizer needs an advanced program
 verification mechanism that is sufficiently expressive and fully
 automatic. In this talk, I will present a program synthesis technique
 based on refinement type checking: a verification mechanism that
 supports automatic reasoning about expressive properties through a
 combination of types and SMT solving.

 The talk will present two applications of type-driven synthesis. The
 first one is a tool called Synquid, which creates recursive functional
 programs from scratch given a refinement type as input. Synquid is the
 first synthesizer powerful enough to automatically discover programs
 that manipulate complex data structures, such as balanced trees and
 propositional formulas. The second application is a language called
 Lifty, which uses type-driven synthesis to repair information flow
 leaks. In Lifty, the programmer specifies expressive information flow
 policies by annotating the sources of sensitive data with refinement
 types, and the compiler automatically inserts access checks necessary
 to enforce these policies across the code."
}

\end{chunk}

\index{van der Walt, Paul}
\begin{chunk}{axiom.bib}
@mastersthesis{Walt12,
 author = "van der Walt, Paul",
 title = {{Reflection in Agda}},
 school = "Utrecht University",
 year = "2012",
 abstract =
 "This project explores the recent addition to Agda enabling
 reflection, in the style of Lisp, MetaML, and Template Haskell. It
 illustrates several possible applications of reflection that arise in
 dependently typed programming, and details the limitations of the
 current implementation of reflection. Examples of type-safe
 metaprograms are given that illustrate the power of reflection coupled
 with a dependently typed language. Among other things the limitations
 inherent in having quote and unquote implemented as keywords are
 highlighted. The fact that lambda terms are returned without typing
 information is discussed, and a solution is presented. Also provided
 is a detailed users’ guide to the reflection API and a library of
 working code examples to illustrate how various common tasks can be
 performed, along with suggestions for an updated reflection API in a
 future version of Agda.",
 paper = "Walk12.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Christiansen, David Thrane}
\begin{chunk}{axiom.bib}
@misc{Chri18,
 author = "Christiansen, David Thrane",
 title = {{A Little Taste of Dependent Types}},
 year = "2018",
 link = "\url{https://www.youtube.com/watch?v=VxINoKFmS4}",
 abstract =
 "Dependent types let us use the same programming language for
 compile-time and run-time code, and are inching their way towards the
 mainstream from research languages like Coq, Agda and Idris. Dependent
 types are useful for programming, but they also unite programming and
 mathematical proofs, allowing us to use the tools and techniques we
 know from programming to do math.

 The essential beauty of dependent types can sometimes be hard to find
 under layers of powerful automatic tools. The Little Typer is an
 upcoming book on dependent types in the tradition of the The Little
 Schemer that features a tiny dependently typed language called Pie. We
 will demonstrate a proof in Pie that is also a program."
}

\end{chunk}

\index{Gurevich, Yuri}
\begin{chunk}{axiom.bib}
@article{Gure18,
 author = "Gurevich, Yuri",
 title = {{Church-Turing Thesis Cannot Possibly Be True}},
 year = "2018",
 link =
 "\url{https://www.microsoft.com/enus/research/video/churchturingthesiscannotpossiblybetrue/}",
 abstract =
 "The thesis asserts this: If an algorithm A computes a partial
 function f from natural numbers to natural numbers then f is partially
 recursive, i.e., the graph of f is recursively enumerable.

 The thesis was formulated in the 1930s. The only algorithms at the
 time were sequential algorithms. Sequential algorithms were
 axiomatized in 2000. This axiomatization was used in 2008 to prove the
 thesis for sequential algorithms, i.e., for the case where A ranges
 over sequential algorithms.

 These days, in addition to sequential algorithms, there are parallel
 algorithms, distributed algorithms, probabilistic algorithms, quantum
 algorithms, learning algorithms, etc.

 The question whether the thesis is true in full generality is actively
 discussed from 1960s. We argue that, in full generality, the thesis
 cannot possibly be true."
}

\end{chunk}

\index{Ellis, Ferris}
\begin{chunk}{axiom.bib}
@misc{Elli18,
 author = "Ellis, Ferris",
 title = {{Strength in Numbers: Unums and the Quest for Reliable
 Arithmetics}},
 year = "2018",
 link = "\url{https://www.youtube.com/watch?v=nVNYjmj_qbY}",
 abstract =
 "In the land of computer arithmetic, a tyrant has ruled since its very
 beginning: the floating point number. Under its rule we have all
 endured countless hardships and cruelties. To this very day the
 floating point number still denies 0.1 + 0.2 == 0.3 and returns
 insidious infinities to software developers everywhere. But a new hero
 has entered the fray: the universal number (unum). Can it topple the
 float number system and its century-long reign?

 This talk will introduce unums, explain their benefits over floating
 point numbers, and examine multiple real world examples comparing the
 two. For those not familiar with floating point numbers and their
 pitfalls, this talk also includes a primer on the topic. Code examples
 are in Rust, though strong knowledge of the language is not needed."
}

\end{chunk}

\index{Weirich, Stephanie}
\begin{chunk}{axiom.bib}
@misc{Weir18,
 author = "Weirich, Stephanie",
 title = {{Dependent Types in Haskell}},
 year = "2018",
 link = "\url{https://www.youtube.com/watch?v=wNa3MMbhwS4}",
 abstract =
 "What has dependent type theory done for Haskell? Over the past ten
 years, the Glasgow Haskell compiler (GHC) has adopted many type system
 features inspired by dependent type theory. In this talk, I will
 discuss the influence of dependent types on the design of GHC and on
 the practice of Haskell programmers. In particular, I will walk
 through an extended example and use it to analyze what it means to
 program with dependent types in Haskell. Throughout, I will
 discuss what we have learned from this experiment in language design:
 what works now, what doesn't work yet, and what surprised us along
 the way."
}

\end{chunk}

\index{Christiansen, David Thrane}
\begin{chunk}{axiom.bib}
@misc{Chri18a,
 author = "Christiansen, David Thrane",
 title = {{Coding for Types: The Universe Pattern in Idris}},
 year = "2018",
 link = "\url{https://www.youtube.com/watch?v=AWeT_G04a0A}"
}

\end{chunk}

\index{Altenkirch, Thorsten}
\begin{chunk}{axiom.bib}
@misc{Alte18,
 author = "Altenkirch, Thorsten",
 title = {{Naive Type Theory}},
 year = "2018",
 link = "\url{https://www.youtube.com/watch?v=bNG53SA4n48p}"
}

\end{chunk}

\index{Hemann, Jason}
\index{Friedman, Daniel}
\begin{chunk}{axiom.bib}
@misc{Hema14,
 author = "Hemann, Jason and Friedman, Daniel",
 title = {{Write the Other Half of Your Program}},
 year = "2014",
 link = "\url{https://www.youtube.com/watch?v=RG9fBbQrVOM}"
}

\end{chunk}

\index{McKenna, Brian}
\begin{chunk}{axiom.bib}
@misc{Mcke14,
 author = "McKenna, Brian",
 title = {{Idris: Practical Dependent Types with Practical Examples}},
 year = "2014",
 link = "\url{https://www.youtube.com/watch?v=4i7KrG1Afbk}"
 abstract =
 "Dependent types turn types into a firstclass language construct and
 allows types to be predicated upon values. Allowing types to be
 controlled by ordinary values allows us to prove many more properties
 about our programs and enables some interesting forms of
 metaprogramming. We can do interesting things like write a type-safe
 printf or verify algebraic laws of data structures, but while
 dependent types are quickly gaining in popularity, it can be tricky to
 find examples which are both useful and introductory.

 This talk will demonstrate some practical dependent typing examples
 (and not sized vectors) using Idris, a language designed for writing
 executable programs, rather than just proving theorems."
}

\end{chunk}

\index{Brady, Edwin}
\begin{chunk}{axiom.bib}
@misc{Brad17,
 author = "Brady, Edwin",
 title = {{Dependent Types in the Idris Programming Language 1}},
 year = "2017",
 link = "\url{https://www.cs.uoregon.edu/research/summerschool/summer17/topics.php}"
}

\end{chunk}

\index{Brady, Edwin C.}
\begin{chunk}{axiom.bib}
@phdthesis{Brad05,
 author = "Brady, Edwin C.",
 title = {{Practical Implementation of a Dependently Typed Functional
 Programming Language}},
 school = "University of Durham",
 year = "2005",
 link = "\url{https://eb.host.cs.standrews.ac.uk/writings/thesis.pdf}",
 abstract =
 "Types express a program’s meaning, and checking types ensures that a
 program has the intended meaning. In a dependently typed programming
 language types are predicated on values, leading to the possibility of
 expressing invariants of a program’s behaviour in its type. Dependent
 types allow us to give more detailed meanings to programs, and hence
 be more confident of their correctness.

 This thesis considers the practical implementation of a dependently
 typed programming language, using the Epigram notation defined by
 McBride and McKinna. Epigram is a high level notation for dependently
 typed functional programming elaborating to a core type theory based
 on Luo’s UTT, using Dybjer’s inductive families and elimination rules
 to implement pattern matching. This gives us a rich framework for
 reasoning about programs. However, a naive implementation introduces
 several run-time overheads since the type system blurs the
 distinction between types and values; these overheads include the
 duplication of values, and the storage of redundant information and
 explicit proofs.

 A practical implementation of any programming language should be as
 efficient as possible; in this thesis we see how the apparent
 efficiency problems of dependently typed programming can be overcome
 and that in many cases the richer type information allows us to apply
 optimisations which are not directly available in traditional
 languages. I introduce three storage optimisations on inductive
 families; forcing, detagging and collapsing. I further introduce a
 compilation scheme from the core type theory to G-machine code,
 including a pattern matching compiler for elimination rules and a
 compilation scheme for efficient run-time implementation of Peano’s
 natural numbers. We also see some low level optimisations for removal
 of identity functions, unused arguments and impossible case branches.
 As a result, we see that a dependent type theory is an effective base
 on which to build a feasible programming language.",
 paper = "Brad05.pdf"
}

\end{chunk}

\index{Soare, Robert I.}
\begin{chunk}{axiom.bib}
@inbook{Soar99,
 author = "Soare, Robert I.",
 title = {{The History and Concept of Computability}},
 booktitle = "Handbook of Computability Theory",
 publisher = "NorthHolland",
 link = "\url{http://www.people.cs.uchicago.edu/~soare/History/handbook.pdf}",
 year = "1999",
 paper = "Soar99.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Post, Emil L.}
\begin{chunk}{axiom.bib}
@article{Post44,
 author = "Post, Emil L.",
 title = {{Recursively Enumerable Sets of Positive Integers and Their
 Decision Problems}},
 journal = "Bull. Amer. Math Soc.",
 volume = "50",
 pages = "284316",
 year = "1944",
 paper = "Post44.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Soare, Robert I.}
\begin{chunk}{axiom.bib}
@inproceedings{Soar07,
 author = "Soare, Robert I.",
 title = {{Computability and Incomputability}},
 booktitle = "Proc. Third Conf. on Computability in Europe}},
 year = "2007",
 publisher = "SpringerVerlog",
 isbn = "139783540730002",
 link = "\url{http://www.people.cs.uchicago.edu/~soare/History/siena.pdf}",
 abstract =
 "The conventional wisdom presented in most computability books and
 historical papers is that there were several researchers in the early
 1930’s working on various precise definitions and demonstrations of a
 function specified by a finite procedure and that they should all
 share approximately equal credit. This is incorrect. It was Turing
 alone who achieved the characterization, in the opinion of G\"odel. We
 also explore Turing’s oracle machine and its analogous properties in
 analysis.",
 paper = "Soar07.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Soare, Robert I.}
\begin{chunk}{axiom.bib}
@article{Soar96,
 author = "Soare, Robert I.",
 title = {{Computability and Recursion}},
 journal = "Bulletin of Symbolic Logic",
 volume = "2",
 year = "1996",
 pages = "284321",
 link = "\url{http://www.people.cs.uchicago.edu/~soare/History/compute.pdf}",
 abstract =
 "We consider the informal concept of 'computability' or 'effective
 calculability' and two of the formalisms commonly used to define
 it, '(Turing) computability' and '(general) recursiveness'. We
 consider their origin, exact technical definition, concepts,
 history, general English meanings, how they became fixed in their
 present roles, how they were first and are now used, their impact
 on non-specialists, how their use will affect the future content of
 the subject of computability theory, and its connection to other
 related areas.

 After a careful historical and conceptual analysis of
 computability and recursion we make several recommendations in
 section 7 about preserving the {\sl intensional} differences
 between the concepts of 'computability' and
 'recursion'. Specifically we recommend that: the term 'recursive'
 should no longer carry the additional meaning of 'computable' or
 'decidable'; functions defined using Turing machines, register
 machines, or their variants should be called 'computable' rather
 than 'recursive'; we should distinguish the intensional difference
 between Church's Thesis and Turing's Thesis, and use the latter
 particularly in dealing with mechanistic questions; and the name of
 the subject should be '{\sl Computability Theory}' or simply
 {\sl Computability} rather than 'Recursive Function Theory'.",
 paper = "Soar96.pdf",
 keywords = "printed"
}

\end{chunk}

\index{van Heijenoort, Jean}
\begin{chunk}{axiom.bib}
@book{Heij67,
 author = "van Heijenoort, Jean",
 title = {{From Frege to Godel}},
 publisher = "Harvard University Press",
 isbn = "0674324498",
 year = "1967"
}

\end{chunk}

\index{Boolos, George S.}
\index{Burgess, John P.}
\index{Jeffrey, Richard C.}
\begin{chunk}{axiom.bib}
@book{Bool07,
 author = "Boolos, George S. and Burgess, John P. and Jeffrey, Richard C.",
 title = {{Computability and Logic}},
 publisher = "Cambridge University Press",
 year = "2007",
 isbn = "9780521701464"
}

\end{chunk}

\index{Copeland, B. Jack}
\begin{chunk}{axiom.bib}
@book{Cope04,
 author = "Copeland, B. Jack",
 title = {{The Essential Turing}},
 publisher = "Oxford University Press",
 year = "2004",
 isbn = "9780198250807"
}

\end{chunk}

\index{Barwise, Jon}
\index{Moss, Lawrence}
\begin{chunk}{axiom.bib}
@book{Barw96,
 author = "Barwise, Jon and Moss, Lawrence",
 title = {{Vicious Circles}},
 publisher = "CSLI Publications",
 year = "1996",
 isbn = "1575860082"
}

\end{chunk}

\index{Harper, Robert}
\begin{chunk}{axiom.bib}
@book{Harp16,
 author = "Harper, Robert",
 title = {{Practical Foundations for Programming Languages}},
 publisher = "Cambridge University Press",
 year = "2016",
 isbn = "9781107150300"
}

\end{chunk}

\index{Hilbert, David}
\index{CohnVossen, S.}
\begin{chunk}{axiom.bib}
@book{Hilb52,
 author = "Hilbert, David and CohnVossen, S.",
 title = {{Geometry and the Imagination}},
 publisher = "Chelsea Publishing Company",
 year = "1952"
}

\end{chunk}

\index{Heyting, A.}
\begin{chunk}{axiom.bib}
@book{Heyt56,
 author = "Heyting, A.",
 title = {{Intuitionism: An Introduction}},
 publisher = "North Holland",
 year = "1956"
}

\end{chunk}

\index{Jeffrey, Richard}
\begin{chunk}{axiom.bib}
@book{Jeff81,
 author = "Jeffrey, Richard",
 title = {{Formal Logic: Its Scope and Limits}},
 publisher = "McGrawHill",
 year = "1981",
 isbn = "0070323216"
}

\end{chunk}

\index{Cosmo, Roberto Di}
\begin{chunk}{axiom.bib}
@book{Cosm95,
 author = "Cosmo, Roberto Di",
 title = {{Isomorphisms of Types}},
 publisher = "Birkhauser",
 year = "1995",
 isbn = "081763763X"
}

\end{chunk}

\index{Winkler, Franz}
\begin{chunk}{axiom.bib}
@book{Wink84,
 author = "Winkler, Franz",
 title = {{The Church-Rosser Property in Computer Algebra and Special
 Theorem Proving}},
 publisher = "Wien",
 year = "1984",
 isbn = "3853695841"
}

\end{chunk}

\index{Mills, Bruce}
\begin{chunk}{axiom.bib}
@book{Mill06,
 author = "Mills, Bruce",
 title = {{Theoretical Introduction to Programming}},
 publisher = "Springer",
 year = "2006",
 isbn = "1846280214"
}

\end{chunk}

\index{Platzer, Andre}
\begin{chunk}{axiom.bib}
@book{Plat18,
 author = "Platzer, Andre",
 title = {{Logical Foundations of Cyber-Physical Systems}},
 publisher = "Springer",
 year = "2018",
 isbn = "9783319635873"
}

\end{chunk}

\index{Yoshida, Masaaki}
\begin{chunk}{axiom.bib}
@book{Yosh97,
 author = "Yoshida, Masaaki",
 title = {{Hypergeometric Functions, My Love}},
 publisher = "Springer",
 year = "1997",
 isbn = "9783322901682"
}

\end{chunk}

\index{Dijkstra, Edsger W.}
\index{Scholten, Carel S.}
\begin{chunk}{axiom.bib}
@book{Dijk90,
 author = "Dijkstra, Edsger W. and Scholten, Carel S.",
 title = {{Predicate Calculus and Program Semantics}},
 publisher = "Springer",
 year = "1990",
 isbn = "0387969578"
}

\end{chunk}

\index{Ehrig, Hartmut}
\index{Mahr, Bernd}
\begin{chunk}{axiom.bib}
@book{Ehri85a,
 author = "Ehrig, Hartmut and Mahr, Bernd",
 title = {{Fundamentals of Algebraic Specification 2: Module
 Specifications and Constraints}},
 publisher = "Springer Verlag",
 year = "1985",
 isbn = "0387517995"
}

\end{chunk}

\index{Boyer, Robert S.}
\index{Moore, J Strother}
\begin{chunk}{axiom.bib}
@book{Boye81a,
 author = "Boyer, Robert S. and Moore, J Strother",
 title = {{The Correctness Problem in Computer Science}},
 publisher = "Academic Press",
 year = "1981",
 isbn = "0121229203"
}

\end{chunk}

\index{Dijkstra, Edsger W.}
\index{Feijen, W.H.J}
\begin{chunk}{axiom.bib}
@book{Dijk88,
 author = "Dijkstra, Edsger W. and Feijen, W.H.J",
 title = {{A Method of Programming}},
 publisher = "Addison Wesley",
 year = "1988",
 isbn = "0201175363"
}

\end{chunk}

\index{Kiczales, Gregor}
\index{des Rivieres, Jim}
\index{Bobrow, Daniel G.}
\begin{chunk}{axiom.bib}
@book{Kicz91,
 author = "Kiczales, Gregor and des Rivieres, Jim and Bobrow, Daniel G.",
 title = {{The Art of the Metaobject Protocol}},
 publisher = "MIT Press",
 year = "1991",
 isbn = "0262610744"
}

\end{chunk}

\index{Herken, Rolf}
\begin{chunk}{axiom.bib}
@book{Herk88,
 author = "Herken, Rolf",
 title = {{The Universal Turing Machine}},
 publisher = "Oxford Science Publications",
 year = "1988",
 isbn = "0198537743"
}

\end{chunk}

\index{Kripke, Saul A.}
\begin{chunk}{axiom.bib}
@book{Krip80,
 author = "Kripke, Saul A.",
 title = {{Naming and Necessity}},
 publisher = "Harvard University Press",
 year = "1980",
 isbn = "0674598466"
}

\end{chunk}

\index{Kleene, S.C.}
\index{Vesley, R.E.}
\begin{chunk}{axiom.bib}
@book{Klee65,
 author = "Kleene, S.C. and Vesley, R.E.",
 title = {{The Foundations of Intuitionistic Mathematics}},
 publisher = "NorthHolland Publishing Company",
 year = "1965"
}

\end{chunk}

\index{Dijkstra, Edsger W.}
\begin{chunk}{axiom.bib}
@misc{Dijk70,
 author = "Dijkstra, Edsger W.",
 title = {{Concern for Correctness as a Guiding Principle for Program
 Composition}},
 year = "1970",
 comment = "EWD288"
 link = "\url{https://www.cs.utexas.edu/users/EWD/transcriptions/EWD02xx/EWD288.html}",
 paper = "Dijk70.html",
 keywords = "printed"
}

\end{chunk}

\index{Hahnle, Reiner}
\index{Wallenburg, Angela}
\begin{chunk}{axiom.bib}
@misc{Hahn04,
 author = "Hahnle, Reiner and Wallenburg, Angela",
 title = {{Using a Software Testing Technique to Improve Theorem Proving}},
 year = "2004",
 link = "\url{http://www.cse.chalmers.se/~angelaw/papers/lic.pdf}",
 comment = "paper follows thesis in file",
 abstract =
 "Most efforts to combine formal methods and software testing go in the
 direction of exploiting formal methods to solve testing problems, most
 commonly test case generation. Here we take the reverse viewpoint and
 show how the technique of partition testing can be used to improve a
 formal proof technique (induction for correctness of loops). We first
 compute a partition of the domain of the induction variable, based on
 the branch predicates in the program code of the loop we wish to
 prove. Based on this partition we derive mechanically a partitioned
 induction rule, which is (hopefully) easier to use than the standard
 induction rule. In particular, with an induction rule that is
 tailored to the program to be verified, less user interaction can be
 expected to be required in the proof. We demonstrate with a number of
 examples the effectiveness of our method.",
 paper = "Wall04.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Wallenburg, Angela}
\begin{chunk}{axiom.bib}
@phdthesis{Wall04,
 author = "Wallenburg, Angela",
 title = {{Inductive Rules for Proving Correctness of Imperative Programs}},
 school = "Goteborg University",
 link = "\url{http://www.cse.chalmers.se/~angelaw/papers/lic.pdf}",
 abstract =
 "This thesis is aimed at simplifying the userinteraction in
 semiinteractive theorem proving for imperative programs. More
 specically , we describe the creation of customised induction rules
 that are tailormade for the specific program to verify and thus make
 the resulting proof simpler. The concern is in user interaction,
 rather than in proof strength. To achiev e this, two different
 verication techniques are used.

 In the first approach, we develop an idea where a software testing
 technique, partition analysis, is used to compute a partition of the
 domain of the induction variable, based on the branch predicates in
 the program we wish to prove correct. Based on this partition we
 derive mechanically a partitioned induction rule, which then inherits
 the divide-and-conquer style of partition analysis, and (hopefully) is
 easier to use than the standard (Peano) induction rule.

 The second part of the thesis continues with a more thorough
 development of the method. Here the connection to software testing is
 completely removed and the focus is on inductive theorem proving only.
 This time, we make use of failed proof attempts in a theorem prover
 to gain information about the problem structure and create the
 partition. Then, based on the partition we create an induction rule,
 in destructor style, that is customised to make the proving of the
 loop simpler.

 With the customised induction rules, in comparison to standard (Peano)
 induction or Noetherian induction, the required user interaction is
 moved to an earlier point in the proof which also becomes more
 modularised. Moreover, by using destructor style induction we
 circumvent the problem of creating inverses of functions. The
 soundness of the customised induction rules created by the method is
 shown. Furthermore, the machinery of the theorem prover (KeY) is used
 to make the method automatic. The induction rules are developed to
 prove the total correctness of loops in an object-oriented language
 and we concentrate on integers.",
 paper = "Wall04.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Olsson, Ola}
\index{Wallenburg, Angela}
\begin{chunk}{axiom.bib}
@misc{Olss04,
 author = "Olsson, Ola and Wallenburg, Angela",
 title = {{Automatic Generation of Customised Induction Rules for
 Proving Correctness of Imperative Programs}},
 year = "2004",
 comment = "paper follows thesis in file",
 link = "\url{http://www.cse.chalmers.se/~angelaw/papers/lic.pdf}",
 abstract =
 "In this paper we develop a method for automatic construction of
 customised induction rules for use in a semi-interactive theorem
 prover. The induction rules are developed to prove the total
 correctness of loops in an object-oriented language. We concentrate
 on integers. First we compute a partition of the domain of the
 induction variable. Our method makes use of failed proof attempts in
 the theorem prover to gain information about the problem structure and
 create the partition. Then, based on this partition we create an
 induction rule, in destructor style, that is customised to make the
 proving of the loop simpler. Our concern is in user interaction,
 rather than in proof strength.

 Using the customised induction rules, some separation of proof of
 control flow and data correctness is achieved, and we find that in
 comparison to standard (Peano) induction or Noetherian induction,
 simpler user interaction can be expected. Furthermore, by using
 destructor style induction we circumvent the problem of creating
 inverses of functions. We show the soundness of the customised
 induction rules created by the method. Furthermore, we use the
 machinery of the theorem prover (KeY) to make the method automatic.
 Several interesting areas are also identified that could open up for
 a larger range of loops the method can handle, as well as pointing
 towards full automation of these cases.",
 paper = "Wall04.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Olsson, Ola}
\index{Wallenburg, Angela}
\begin{chunk}{axiom.bib}
@misc{Olss05,
 author = "Olsson, Ola and Wallenburg, Angela",
 title = {{Customised Induction Rules for Proving Correctness of
 Imperative Programs}},
 year = "2005",
 link = "\url{http://www.cse.chalmers.se/~angelaw/papers/olswbg05.pdf}",
 abstract =
 "In this paper we develop a method for automatic construction of
 customised induction rules for use in a semi-interactive theorem
 prover. The induction rules are developed to prove the total
 correctness of loops in an object-oriented language. We concentrate
 on integers. First we compute a partition of the domain of the
 induction variable. Our method makes use of failed proof attempts in
 the theorem prover to gain information about the problem structure and
 create the partition. Then, based on this partition we create an
 induction rule, in destructor style, that is customised to make the
 proving of the loop simpler. Our concern is in user interaction,
 rather than in proof strength.

 Using the customised induction rules, some separation of proof of
 control flow and data correctness is achieved, and we find that in
 comparison to standard (Peano) induction or Noetherian induction,
 simpler user interaction can be expected. Furthermore, by using
 destructor style induction we circumvent the problem of creating
 inverses of functions. We show the soundness of the customised
 induction rules created by the method. Furthermore, we use the
 machinery of the theorem prover (KeY) to make the method automatic.
 Several interesting areas are also identified that could open up for
 a larger range of loops the method can handle, as well as pointing
 towards full automation of these cases.",
 paper = "Olss05.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Wallenburg, Angela}
\begin{chunk}{axiom.bib}
@misc{Wall09,
 author = "Wallenburg, Angela",
 title = {{Generalisation of Inductive Formulae based on Proving by
 Symbolic Execution}},
 year = "2009",
 link = "\url{http://www.cse.chalmers.se/~angelaw/papers/awWING2009.pdf}",
 abstract =
 "Induction is a powerful method that can be used to prove the total
 correctness of program loops. Unfortunately the induction proving
 process in an interactive theorem prover is often very cumbersome. In
 particular it can be difficult to find the right induction formula.
 We describe a method for generalising induction formulae by analysing
 a symbolic proof attempt in a semi-interactive first-order theorem
 prover. Based on the proof attempt we introduce universally
 quantified variables, metavariables and sets of constraints on these.
 The constraints describe the conditions for a successful proof. By
 the help of examples, we outline some classes of problems and their
 associated constraint solutions, and possible ways to automate the
 constraint solving.",
 paper = "Wall09.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Moy, Yannick}
\index{Wallenburg, Angela}
\begin{chunk}{axiom.bib}
@misc{Moyx10,
 author = "Moy, Yannick and Wallenburg, Angela",
 title = {{Tokeneer: Beyond Formal Program Verification}},
 year = "2010",
 link = "\url{http://www.cse.chalmers.se/~angelaw/papers/ERTS2010.pdf}",
 abstract =
 "Tokeneer is a smallsized (10 kloc) security system which was
 formally developed and verified by Praxis at the request of NSA, using
 SPARK technology. Since its opensource release in 2008, only two
 problems were found, one by static analysis, one by code review. In
 this paper, we report on experiments where we systematically applied
 various static analysis tools (compiler, bug-finder, proof tools)
 and focused code reviews to all of the SPARK code (formally verified)
 and supporting Ada code (not formally verified) of the Tokeneer
 Project. We found 20 new problems overall, half of which are defects
 that could lead to a system failure should the system be used in its
 current state. Only two defects were found in SPARK code, which
 confirms the benefits of applying formal verification to reach
 higher levels of assurance. In order to leverage these benefits to
 code that was not formally verified from the start, we propose to
 associate static analyses and dynamic analyses around a common
 expression of properties and constraints. This is the goal of starting
 project Hi-Lite, which involves AdaCore and Altran Praxis together
 with several industrial users and research labs.",
 paper = "Moyx10.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Baker, Henry G.}
\begin{chunk}{axiom.bib}
@article{Bake92,
 author = "Baker, Henry G.",
 title = {{A Decision Procedure for Common Lisp's SUBTYPEP Predicate}},
 journal = "Lisp and Symbolic Computation",
 volume = "5",
 number = "3",
 year = "1992",
 pages = "157190"
 link = "\url{}",
 abstract =
 "Common Lisp [CL84,CL90] includes a dynamic datatype system of
 moderate complexity, as well as predicates for checking the types of
 language objects. Additionally, an interesting predicate of two 'type
 specifiers'— SUBTYPEP —is included in the language. This subtypep
 predicate provides a mechanism with which to query the Common Lisp
 type system regarding containment relations among the various built-in
 and user-defined types. While subtypep is rarely needed by an
 applications programmer, the efficiency of a Common Lisp
 implementation can depend critically upon the quality of its subtypep
 predicate: the runtime system typically calls upon subtypep to decide
 what sort of representations to use when making arrays; the compiler
 calls upon subtypep to interpret user declarations, on which efficient
 data representation and code generation decisions are based.

 As might be expected due to the complexity of the Common Lisp type
 system, there may be type containment questions which cannot be
 decided. In these cases subtypep is expected to return 'can't
 determine', in order to avoid giving an incorrect
 answer. Unfortunately, most Common Lisp implementations have abused
 this license by answering 'can't determine' in all but the most
 trivial cases. In particular, most Common Lisp implementations of
 SUBTYPEP fail on the basic axioms of the Common Lisp type system
 itself [CL84,p.33]. This situation is particularly embarrassing for
 Lisp—the premier 'symbol processing language'—in which the
 implementation of complex symbolic logical operations should be
 relatively easy. Since subtypep was presumably included in Common Lisp
 to answer the hard cases of type containment, this 'lazy evaluation'
 limits the usefulness of an important language feature.

 This paper shows how those type containment relations of Common Lisp
 which can be decided at all, can be decided simply and quickly by a
 decision procedure which can dramatically reduce the number of
 occurrences of the 'can't determine' answer from subtypep . This
 decision procedure does not require the conversion of a type specifier
 expression to conjunctive or disjunctive normal form, and therefore
 does not incur the exponential explosion in space and time that such a
 conversion would entail.

 The lattice mechanism described here for deciding subtypep is also
 ideal for performing type inference [Baker90]; the particular
 implementation developed here, however, is specific to the type system
 of Common Lisp [Beer88].",
 paper = "Bake92.pdf",
 keywords = "printed"
}

\end{chunk}

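Baker's complaint can be seen directly at a Lisp listener. The
following sketch (ours, not Baker's) shows the two-valued protocol of
{\tt subtypep}: the first value answers the containment question, the
second reports whether the question was decided at all. The last
example involves a {\tt satisfies} type, where ``can't determine'' is
unavoidable; Baker's point is that implementations also give that
answer on questions that are decidable.

\begin{verbatim}
;; SUBTYPEP returns two values: "is T1 a subtype of T2?" and
;; "was the question decided?".
(subtypep 'fixnum 'integer)                 ; => T,   T   decided: yes
(subtypep 'integer 'fixnum)                 ; => NIL, T   decided: no
(subtypep '(integer 0 10) '(integer 0 100)) ; => T,   T
;; a SATISFIES type is undecidable in general, so NIL, NIL
;; ("can't determine") is the honest answer here
(subtypep '(satisfies primep) 'number)      ; => NIL, NIL
\end{verbatim}
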
\index{Haynes, Christopher T.}
\index{Friedman, Daniel P.}
\begin{chunk}{axiom.bib}
@techreport{Hayn87,
 author = "Haynes, Christopher T. and Friedman, Daniel P."
 title = {{Embedding Continuations in Procedural Objects}},
 type = "technical report",
 institution = "Indiana University",
 number = "213",
 year = "1987",
 abstract =
 "Continuations, when available as firstclass objects, provide a
 general control abstraction in programming languages. They
 liberate the programmer from specific control structures,
 increasing programming language extensibility. Such continuations
 may be extended by embedding them in procedural objects. This
 technique is first used to restore a fluid environment when a
 continuation object is invoked. We then consider techniques for
 constraining the power of continuations in the interest of
 security and efficiency. Domain mechanisms, which create dynamic
 barriers for enclosing control, are implemented using
 fluids. Domains are then used to implement an unwind-protect
 facility in the presence of first-class continuations. Finally, we
 present two mechanisms, wind-unwind and dynamic-wind, that
 generalize unwind-protect.",
 paper = "Hayn87.pdf",
 keywords = "printed"
}

\end{chunk}

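Common Lisp has no first-class continuations, but its
{\tt unwind-protect} illustrates half of the problem the paper
addresses. A minimal sketch (ours, not from the paper):

\begin{verbatim}
(defun scan-with-cleanup ()
  (block scan
    (unwind-protect
        (progn
          (format t "resource acquired~%")
          (return-from scan :escaped))     ; non-local exit ...
      (format t "resource released~%")))) ; ... cleanup still runs

;; (scan-with-cleanup) prints both lines and returns :ESCAPED
\end{verbatim}

With first-class continuations a captured continuation could re-enter
the protected body after the cleanup had already run; the domains and
dynamic-wind of the paper are designed to cope with exactly that.
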
\index{Goldblatt, Robert}
\begin{chunk}{axiom.bib}
@book{Gold84,
 author = "Goldblatt, Robert",
 title = {{Topoi: The Categorical Analysis of Logic}},
 year = "1984",
 publisher = "Dover",
 isbn = "9780486450261"
}

\end{chunk}

\index{Clocksin, William F.}
\begin{chunk}{axiom.bib}
@book{Cloc91,
 author = "Clocksin, William F.",
 title = {{Clause and Effect}},
 year = "1991",
 publisher = "Springer",
 isbn = "3-540-62971-8"
}

\end{chunk}

\index{Brooks, Rodney A.}
\index{Gabriel, Richard P.}
\index{Steele, Guy L.}
\begin{chunk}{axiom.bib}
@article{Broo82a,
 author = "Brooks, Rodney A. and Gabriel, Richard P. and Steele, Guy L.",
 title = {{An Optimizing Compiler for Lexically Scoped LISP}},
 journal = "ACM SIGPLAN Notices",
 volume = "17",
 number = "6",
 year = "1982",
 pages = "261275',
 abstract =
 "We are developing an optimizing compiler for a dialect of the
 LISP language. The current target architecture is the S-1, a
 multiprocessing supercomputer designed at Lawrence Livermore
 National Laboratory. While LISP is usually thought of as a language
 primarily for symbolic processing and list manipulation, this
 compiler is also intended to compete with the S-1 PASCAL and
 FORTRAN compilers for quality of compiled numerical code. The S-1
 is designed for extremely high-speed signal processing as well as
 for symbolic computation; it provides primitive operations on
 vectors of floating-point and complex numbers. The LISP compiler
 is designed to exploit the architecture heavily.

 The compiler is structurally and conceptually similar to the
 BLISS-11 compiler and the compilers produced by PQCC. In
 particular, the TNBIND technique has been borrowed and extended.

 Particularly interesting properties of the compiler are:
 \begin{itemize}
 \item Extensive use of source-to-source transformation
 \item Use of an intermediate form that is expression-oriented
 rather than statement-oriented
 \item Exploitation of tail-recursive function calls to represent
 complex control structures
 \item Efficient compilation of code that can manipulate procedural
 objects that require heap-allocated environments
 \item Smooth run-time interfacing between the ``numerical world''
 and ``LISP pointer world'', including automatic stack allocation
 of objects that ordinarily must be heap-allocated
 \end{itemize}

 Each of these techniques has been used before, but we believe
 their synthesis to be original and unique.

 The compiler is table-driven to a great extent, more so than
 BLISS-11 but less so than a PQCC compiler. We expect to be able to
 redirect the compiler to other target architectures such as the
 VAX or PDP-10 with relatively little effort.",
 paper = "Broo82a.pdf",
 keywords = "printed"
}

\end{chunk}

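As an illustration of the third item in the list above, tail calls
can encode a state machine directly. This sketch is ours, not from
the paper; note that standard Common Lisp does not require tail-call
elimination, though many compilers perform it at high optimization
settings.

\begin{verbatim}
;; a two-state machine as mutually recursive functions; each
;; recursive call is in tail position, i.e. it can compile to a jump
(defun even-length-p (xs)
  (if (null xs)
      t
      (odd-length-p (cdr xs))))

(defun odd-length-p (xs)
  (if (null xs)
      nil
      (even-length-p (cdr xs))))

;; (even-length-p '(a b c d)) => T, with no stack growth when the
;; compiler implements tail calls as jumps
\end{verbatim}
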
\index{Brooks, Rodney A.}
\index{Gabriel, Richard P.}
\index{Steele, Guy L.}
\begin{chunk}{axiom.bib}
@inproceedings{Broo82,
 author = "Brooks, Rodney A. and Gabriel, Richard P. and Steele, Guy L.",
 title = {{S-1 Common Lisp Implementation}},
 booktitle = "Proc ACM Symp. on Lisp and functional programming",
 publisher = "ACM",
 year = "1982",
 isbn = "0897910826",
 abstract =
 "We are developing a Lisp implementation for the Lawrence
 Livermore National Laboratory S1 Mark IIA computer. The dialect
 of Lisp is an extension of COMMON Lisp [Steele;1982], a descendant
 of Maclisp [Moon;1974] and Lisp Machine Lisp [Weinreb;1981].",
 paper = "Broo82.pdf",
 keywords = "printed"

}

\end{chunk}

\index{Beer, Randall D.}
\begin{chunk}{axiom.bib}
@article{Beer87,
 author = "Beer, Randall D.",
 title = {{Preliminary Report on A Practical Type Inference System
 for Common Lisp}},
 journal = "Lisp Pointers",
 volume = "1",
 number = "2",
 year = "1987",
 pages = "511",
 paper = "Beer87.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Brooks, Rodney A.}
\index{Gabriel, Richard P.}
\begin{chunk}{axiom.bib}
@inproceedings{Broo84,
 author = "Brooks, Rodney A. and Gabriel, Richard P.",
 title = {{A Critique of Common Lisp}},
 booktitle = "Symposium on Lisp and Functional Programming",
 pages = "18",
 year = "1984",
 abstract =
 "A major goal of the COMMON LISP committee was to define a Lisp
 language with sufficient power and generality that people would be
 happy to stay within its confines and thus write inherently
 transportable code. We argue that the resulting language
 definition is too large for many short-term and medium-term
 potential applications. In addition many parts of COMMON LISP
 cannot be implemented very efficiently on stock hardware. We
 further argue that the very generality of the design with its
 different efficiency profiles on different architectures works
 against the goal of transportability.",
 paper = "Broo84.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Cousot, Patrick}
\index{Cousot, Radhia}
\begin{chunk}{axiom.bib}
@inproceedings{Cous77,
 author = "Cousot, Patrick and Cousot, Radhia",
 title = {{Abstract Interpretation: A Unified Lattice Model for
 Static Analysis of Programs by Construction or
 Approximation of Fixpoints}},
 booktitle = "Symp. on Principles of Programming Languages",
 pages = "238252",
 year = "1977",
 paper = "Cous77.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Rees, Jonathan}
\begin{chunk}{axiom.bib}
@article{Rees82,
 author = "Rees, Jonathan",
 title = {{T: A Dialect of Lisp or, LAMBDA: The Ultimate Software Tool}},
 year = "1982",
 abstract =
 "The T proejct is an experiment in language design and
 implementation. Its purpose is to test the thesis developed by
 Steele and Sussman in their series of papers about the Scheme
 language: that Scheme may be used as the basis for a practical
 programming language of exceptional expressive power, and that
 implementations of Scheme could perform better than other Lisp
 systems, and competitively with implementations of programming
 languages, such as C and Bliss, which are usually considered to be
 inherently more efficient than Lisp on conventional machine
 architectures. We are developing a portable implementation of T,
 currently targeted for the VAX under the Unix and VMS operating
 systems and for the Apollo, a MC68000-based workstation.",
 paper = "Rees82.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Shivers, Olin}
\begin{chunk}{axiom.bib}
@techreport{Shiv90,
 author = "Shivers, Olin",
 title = {{Data-Flow Analysis and Type Recovery in Scheme}},
 year = "1990",
 type = "technical report",
 institution = "Carnegie Mellon University",
 number = "CMUCS90115",
 abstract =
 "The lack of explicit type information in Scheme prevents the
 application of many compiler optimizations. Implicit type
 information can oftentimes be recovered by analyzing the flow of
 control through primitive operations and conditionals. Such flow
 analysis, however, is difficult in the presence of higher-order
 functions. This paper presents a technique for recovering type
 information from Scheme programs using flow analysis. The
 algorithm used for flow analysis is semantically based, and is
 novel in that it correctly handles the ``environment flow''
 problem. Several examples of a working implementation of the
 analysis are presented. The techniques are applicable not only to
 Scheme type recovery, but also to related languages, such as
 Common Lisp and Standard ML, and to related data-flow analyses,
 such as range analysis, future analysis, and copy propagation.",
 paper = "Shiv90.pdf",
 keywords = "printed"
}

\end{chunk}

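The kind of fact such an analysis recovers can be worked by hand in a
two-line function (our example, not from the paper):

\begin{verbatim}
(defun sum-list (xs)
  (if (endp xs)
      0
      ;; here the ENDP test has failed, so a flow-sensitive analysis
      ;; can assign XS the type CONS and compile the CAR and CDR
      ;; below without run-time checks
      (+ (car xs) (sum-list (cdr xs)))))
\end{verbatim}
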
\index{Steenkiste, Peter}
\index{Hennessy, John}
\begin{chunk}{axiom.bib}
@article{Stee87,
 author = "Steenkiste, Peter and Hennessy, John",
 title = {{Tags and Type Checking in LISP: Hardware and Software Approaches}},
 journal = "ACM SIGPLAN Notices",
 volume = "22",
 number = "10",
 pages = "5059",
 year = "1987",
 abstract =
 "One of the major factors that distinguishes LISP from many other
 languages (Pascal, C, Fortran, etc.) is the need for runtime type
 checking. Runtime type checking is implemented by adding to each
 data object a tag that encodes type information. Tags must be
 compared for type compatibility, removed when using the data, and
 inserted when new data items are created. This tag manipulation,
 together with other work related to dynamic type checking and
 generic operations, constitutes a significant component of the
 execution time of LISP programs. This has led both to the
 development of LISP machines that support tag checking in hardware
 and to the avoidance of type checking by users running on stock
 hardware. To understand the role and necessity of special-purpose
 hardware for tag handling, we first measure the cost of type
 checking operations for a group of LISP programs. We then examine
 hardware and software implementations of tag operations and
 estimate the cost of tag handling with the different tag
 implementation schemes. The data shows that minimal levels of
 support provide most of the benefits, and that tag operations can
 be relatively inexpensive, even when no special hardware support
 is present.",
 paper = "Stee87.pdf",
 keywords = "printed"
}

\end{chunk}

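The software side of this tradeoff is visible in portable Common
Lisp, where declarations allow a compiler to drop tag checks. A small
sketch (ours; the code actually emitted is implementation-dependent):

\begin{verbatim}
;; generic +: the compiled code must inspect the tags of X and Y
;; and dispatch (fixnum, bignum, float, ratio, ...) at run time
(defun add-generic (x y)
  (+ x y))

;; declared +: the compiler may drop the tag checks and emit a
;; single machine add; compare (disassemble 'add-generic) with
;; (disassemble 'add-fixnum) in your implementation
(defun add-fixnum (x y)
  (declare (type fixnum x y)
           (optimize (speed 3) (safety 0)))
  (the fixnum (+ x y)))
\end{verbatim}
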
\index{Brooks, Rodney A.}
\index{Posner, David B.}
\index{McDonald, James L.}
\index{White, Jon L.}
\index{Benson, Eric}
\index{Gabriel, Richard P.}
\begin{chunk}{axiom.bib}
@inproceedings{Broo86,
 author = "Brooks, Rodney A. and Posner, David B. and McDonald, James L.
 and White, Jon L. and Benson, Eric} and Gabriel, Richard P.",
 title = {{Design of An Optimising, Dynamically Retargetable Compiler
 for Common Lisp}},
 booktitle = "Conf. on Lisp and Functional Programming",
 publisher = "ACM",
 pages = "6785",
 year = "1986",
 abstract =
 "We outline the components of a retargetable crosscompiler for
 the Common Lisp language. A description is given of a method for
 modeling the various hardware features in the compiler's database,
 and a breakdown is shown of the compiler itself into various
 machine-independent and machine-dependent modules. A novel feature
 of this development is the dynamic nature of the retargeting:
 Databases for multiple hardware architectures are a standard part
 of the compiler, and the internal interfaces used by the compiler
 are such that the machine-dependent modules may be instantly
 switched from one to another. Examples of generated code in
 several environments will be given to demonstrate the high quality
 of the output available, even under this very modular approach.",
 paper = "Broo86.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Kaplan, Marc A.}
\index{Ullman, Jeffrey D.}
\begin{chunk}{axiom.bib}
@article{Kapl80,
 author = "Kaplan, Marc A. and Ullman, Jeffrey D.",
 title = {{A Scheme for the Automatic Inference of Variable Types}},
 journal = "J. ACM",
 volume = "27",
 number = "1",
 pages = "128.145",
 year = "1980",
 abstract =
 "In this paper an algorithm for the determination of runtime
 types in a programming language requiring no type declarations is
 presented. It is demonstrated that this algorithm is superior to
 other published algorithms in the sense that it produces stronger
 assertions about the set of possible types for variables than do
 other known algorithms. In fact this algorithm is shown to be the
 best possible algorithm from among all those that use the same set
 of primitive operators.",
 paper = "Kapl80.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Ma, KwanLiu}
\index{Kessler, Robert R.}
\begin{chunk}{axiom.bib}
@article{Maxx90,
 author = "Ma, KwanLiu and Kessler, Robert R.",
 title = {{TICL - A Type Inference System for Common Lisp}},
 journal = "Software - Practice and Experience",
 volume = "20",
 number = "6",
 pages = "593623",
 year = "1990",
 abstract =
 "Most current Common Lisp compilers generate more efficient code
 when supplied with data type information. However, in keeping with
 standard Lisp programming style, most programmers are reluctant to
 provide type information; they simply allow the runtime type
 system to manage the data types accordingly. To fill this gap, we
 have designed and implemented a type inference system for Common
 Lisp (TICL). TICL takes a Lisp program that has been annotated
 with a few type declarations, adds as many declarations as
 possible, and produces a type declared program. The compiler can
 then use this information to generate more efficient
 code. Measurements indicate that a 20 per cent speed improvement
 can generally be achieved.",
 paper = "Maxx90.pdf",
 keywords = "printed"
}

\end{chunk}

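A hand-worked instance of the propagation TICL automates (our sketch,
not from the paper): given the single user-supplied declaration on the
argument, the declarations on the intermediates are derivable.

\begin{verbatim}
(defun norm2 (v)
  (declare (type (simple-array double-float (*)) v)) ; user-supplied
  (let ((acc 0.0d0))
    ;; derivable: ACC is initialized to a DOUBLE-FLOAT and is only
    ;; incremented by products of DOUBLE-FLOAT array elements
    (declare (type double-float acc))
    (dotimes (i (length v) (sqrt acc))
      (incf acc (* (aref v i) (aref v i))))))
\end{verbatim}
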
\index{Henglein, Friedrich}
\begin{chunk}{axiom.bib}
@phdthesis{Heng89,
 author = "Henglein, Friedrich",
 title = {{Polymorphic Type Inference and Semi-Unification}},
 school = "Rutgers",
 year = "1989",
 link =
 "\url{http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.388.1275&rep=rep1&type=pdf}",
 abstract =
 "In the last ten years declarationfree programming languages with
 a polymorphic typing discipline (ML, B) have been developed to
 approximate the flexibility and conciseness of dynamically typed
 languages (LISP, SETL) while retaining the safety and execution
 efficiency of conventional statically typed languages (Algol68,
 Pascal). These polymorphic languages can be type checked at
 compile time, yet allow functions whose arguments range over a
 variety of types.

 We investigate several polymorphic type systems, the most powerful
 of which, termed Milner-Mycroft Calculus, extends the so-called
 let-polymorphism found in, e.g., ML with a polymorphic typing rule
 for recursive definitions. We show that semi-unification, the
 problem of solving inequalities over first-order terms,
 characterizes type checking in the Milner-Mycroft Calculus up to
 polynomial time, even in the restricted case where nested
 definitions are disallowed. This permits us to extend some
 infeasibility results for related combinatorial problems to type
 inference and to correct several claims and statements in the
 literature.

 We prove the existence of unique most general solutions of term
 inequalities, called most general semi-unifiers, and present an
 algorithm for computing them that terminates for all known inputs
 due to a novel ``extended occurs check''. We conjecture this
 algorithm to be uniformly terminating even though, at present,
 general semi-unification is not known to be decidable. We prove
 termination of our algorithm for a restricted case of
 semi-unification that is of independent interest.

 Finally, we offer an explanation for the apparent practicality of
 polymorphic type inference in the face of theoretical
 intractability results.",
 paper = "Heng89.pdf",
}

\end{chunk}

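Ordinary first-order unification, the problem that semi-unification
generalizes, fits in a few lines of Lisp. A minimal sketch (ours),
with substitutions represented as association lists and variables as
symbols beginning with {\tt ?}:

\begin{verbatim}
(defun var-p (x)
  (and (symbolp x)
       (plusp (length (symbol-name x)))
       (char= (char (symbol-name x) 0) #\?)))

(defun walk (x subst)
  "Follow variable bindings in SUBST until a non-bound term remains."
  (let ((binding (and (var-p x) (assoc x subst))))
    (if binding (walk (cdr binding) subst) x)))

(defun occurs-p (v x subst)
  (let ((x (walk x subst)))
    (cond ((eql v x) t)
          ((consp x) (or (occurs-p v (car x) subst)
                         (occurs-p v (cdr x) subst)))
          (t nil))))

(defun unify (a b subst)
  "Extend SUBST so that A and B become equal, or return :FAIL."
  (if (eq subst :fail)
      :fail
      (let ((a (walk a subst)) (b (walk b subst)))
        (cond ((eql a b) subst)
              ((var-p a)
               (if (occurs-p a b subst) :fail (acons a b subst)))
              ((var-p b) (unify b a subst))
              ((and (consp a) (consp b))
               (unify (cdr a) (cdr b) (unify (car a) (car b) subst)))
              (t :fail)))))

;; (unify '(-> ?a int) '(-> bool ?b) '())
;;   => ((?B . INT) (?A . BOOL)), i.e. ?a = bool, ?b = int
\end{verbatim}
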
\index{Nimmer, Jeremy W.}
\index{Ernst, Michael D.}
\begin{chunk}{axiom.bib}
@inproceedings{Nimm02,
 author = "Nimmer, Jeremy W. and Ernst, Michael D.",
 title = {{Automatic Generation of Program Specifications}},
 booktitle = "Symp. on Software Testing and Analysis",
 publisher = "ACM Press",
 year = "2002",
 abstract =
 "Producing specifications by dynamic (runtime) analysis of program
 executions is potentially unsound, because the analyzed executions may
 not fully characterize all possible executions of the program. In
 practice, how accurate are the results of a dynamic analysis? This
 paper describes the results of an investigation into this question,
 determining how much specifications generalized from program runs
 must be changed in order to be verified by a static checker.
 Surprisingly, small test suites captured nearly all program behavior
 required by a specific type of static checking; the static checker
 guaranteed that the implementations satisfy the generated
 specifications, and ensured the absence of runtime exceptions. Measured
 against this verification task, the generated specifications scored
 over 90 percent on precision, a measure of soundness, and on recall, a
 measure of completeness.

 This is a positive result for testing, because it suggests that
 dynamic analyses can capture all semantic information of interest for
 certain applications. The experimental results demonstrate that a
 specific technique, dynamic invariant detection, is effective at
 generating consistent, sufficient specifications for use by a static
 checker. Finally, the research shows that combining static and
 dynamic analyses over program specifications has benefits for users of
 each technique, guaranteeing soundness of the dynamic analysis and
 lessening the annotation burden for users of the static analysis.",
 paper = "Nimm02.pdf",
 keywords = "printed"
}

\end{chunk}

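A toy version of the dynamic half of this pipeline (our sketch; the
detector actually used, Daikon, draws candidate invariants from a
much larger grammar):

\begin{verbatim}
(defun detect-invariants (fn test-inputs)
  "Return the candidate invariants that hold on every observed call."
  (let ((candidates
          (list (cons "result >= 0"
                      (lambda (in out)
                        (declare (ignore in)) (>= out 0)))
                (cons "result >= input"
                      (lambda (in out) (>= out in))))))
    (dolist (in test-inputs (mapcar #'car candidates))
      (let ((out (funcall fn in)))
        (setf candidates
              (remove-if-not (lambda (c) (funcall (cdr c) in out))
                             candidates))))))

;; (detect-invariants #'isqrt '(0 3 5)) => ("result >= 0")
;; the second candidate is falsified by (isqrt 3) = 1
\end{verbatim}
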
\index{Hantler, Sidney L.}
\index{King, James C.}
\begin{chunk}{axiom.bib}
@article{Hant76,
 author = "Hantler, Sidney L. King, James C.",
 title = {{An Introduction to Proving the Correctness of Programs}},
 journal = "ACM Computing Surveys",
 volume = "8",
 number = "3",
 pages = "331353",
 year = "1976",
 abstract =
 "This paper explains, in an introductory fashion, the method of
 specifying the correct behavior of a program by the use of
 input/output assertions and describes one method for showing that the
 program is correct with respect to those assertions. An initial
 assertion characterizes conditions expected to be true upon entry to
 the program and a final assertion characterizes conditions expected to
 be true upon exit from the program. When a program contains no
 branches, a technique known as symbolic execution can be used to show
 that the truth of the initial assertion upon entry guarantees the
 truth of the final assertion upon exit. More generally, for a program
 with branches one can define a symbolic execution tree. If there is
 an upper bound on the number of times each loop in such a program may
 be executed, a proof of correctness can be given by a simple traversal
 of the (finite) symbolic execution tree.

 However, for most programs, no fixed bound on the number of times each
 loop is executed exists and the corresponding symbolic execution trees
 are infinite. In order to prove the correctness of such programs, a
 more general assertion structure must be provided. The symbolic
 execution tree of such programs must be traversed inductively rather
 than explicitly. This leads naturally to the use of additional
 assertions which are called ``inductive assertions.''",
 paper = "Hant76.pdf",
 keywords = "printed",
}

\end{chunk}

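The assertion structure is easy to state executably. A small sketch
(ours) placing initial, inductive, and final assertions around a
summation loop:

\begin{verbatim}
(defun sum-to (n)
  (assert (and (integerp n) (>= n 0)))    ; initial assertion
  (let ((sum 0))
    (loop for i from 1 to n do
      ;; inductive assertion: on entry SUM = 1 + 2 + ... + (i-1)
      (assert (= sum (/ (* i (1- i)) 2)))
      (incf sum i))
    (assert (= sum (/ (* n (1+ n)) 2)))   ; final assertion
    sum))
\end{verbatim}
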
\index{Wirth, Niklaus}
\begin{chunk}{axiom.bib}
@misc{Wirt15,
 author = "Wirth, Niklaus",
 title = {{The Design of a RISC Architecture and its Implementation
 with an FPGA}},
 link =
 "\url{https://www.inf.ethz.ch/personal/wirth/FPGArelatedWork/RISC.pdf}",
 year = "2015",
 paper = "Wirt15.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Remy, Didier}
\begin{chunk}{axiom.bib}
@techreport{Remy92,
 author = "Remy, Didier",
 title = {{Extensions to ML type system with a sorted equation theory
 on types}},
 type = "research report",
 institution = "INRIA",
 number = "RR1766",
 year = "1992",
 abstract =
 "We extend the ML language by allowing a sorted regular equational
 theory on types for which unification is decidable and unitary. We
 prove that the extension keeps principal typings and subject
 reduction. A new set of typing rules is proposed so that type
 generalization is simpler and more efficient. We consider typing
 problems as general unification problems, which we solve with a
 formalism of unificands. Unificands naturally deal with sharing
 between types and lead to a more efficient type inference
 algorithm. The use of unificands also simplifies the proof of
 correctness of the algorithm by splitting it into more elementary
 steps.",
 paper = "Remy92.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Atkey, Robert}
\begin{chunk}{axiom.bib}
@inproceedings{Atke18,
 author = "Atkey, Robert",
 title = {{Syntax and Semantics of Quantitative Type Theory}},
 booktitle = "LICS",
 year = "2018",
 publisher = "ACM",
 isbn = "9781450355834",
 abstract =
 "We present Quantitative Type Theory, a Type Theory that records
 usage information for each variable in a judgement, based on a
 previous system by McBride. The usage information is used to give
 a realizability semantics using a variant of Linear Combinatory
 Algebras, refining the usual realizability semantics of Type
 Theory by accurately tracking resource behaviour. We define the
 semantics in terms of Quantitative Categories with Families, a
 novel extension of Categories with Families for modelling resource
 sensitive type theories.",
 paper = "Atke18.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Gabriel, Richard P.}
\begin{chunk}{axiom.bib}
@misc{Gabr10,
 author = "Gabriel, Richard P.",
 title = {{A Review of The Art of the Metaobject Protocol}},
 year = "2010",
 paper = "Gabr10.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Church, Alonzo}
\begin{chunk}{axiom.bib}
@book{Chur41,
 author = "Church, Alonzo",
 title = {{The Calculi of Lambda Conversion}},
 year = "1941",
 publisher = "Princeton University Press",
 paper = "Alon41*.pdf"
}

\end{chunk}

\index{Chow, Timothy Y.}
\begin{chunk}{axiom.bib}
@misc{Chow18,
 author = "Chow, Timothy Y.",
 title = {{The Consistency of Arithmetic}},
 year = "2018",
 link = "\url{http://timothychow.net/consistent.pdf}",
 paper = "Chow18.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Torlak, Emina}
\begin{chunk}{axiom.bib}
@misc{Torl17,
 author = "Torlak, Emina",
 title = {{Synthesis and Verification for All}},
 link = "\url{https://www.youtube.com/watch?v=KpDyuMIb_E0}",
 year = "2017"
}

\end{chunk}

\index{Burstall, R.M.}
\begin{chunk}{axiom.bib}
@article{Burs69,
 author = "Burstall, R.M.",
 title = {{Proving Properties of Programs by Structural Induction}},
 journal = "Computer Journal",
 volume = "12",
 number = "1",
 pages = "4148",
 link = "\url{http://www.cse.chalmers.se/edu/year/2010/course/DAT140_Types/Burstall.pdf}",
 year = "1969",
 abstract =
 "This paper discusses the technique of structural induction for
 proving theorems about programs. This technique is closely related to
 recursion induction but makes use of the inductive definition of the
 data structures handled by the programs. It treats programs with
 recursion but without assignments or jumps. Some syntactic extensions
 to Landin's functional programming language ISWIM are suggested which
 make it easier to program the manipulation of data structures and to
 develop proofs about such programs. Two sample proofs are given to
 demonstrate the technique, one for a tree sorting algorithm and one
 for a simple compiler for expressions.",
 paper = "Burs69.pdf",
 keywords = "printed"
}

\end{chunk}

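The flavor of such proofs: the induction mirrors the recursive
structure of the data. A sketch (ours, in Lisp rather than ISWIM):

\begin{verbatim}
;; claim: (count-tips tr) = (length (tips tr)) for every tree TR.
;; base case: an atom; both sides are 1.
;; inductive step: a cons; assume the claim for the CAR and CDR
;; subtrees, then both sides are the sum of the two subtree values
;; (LENGTH distributes over APPEND). The proof has the same shape
;; as the two definitions.
(defun tips (tree)
  (if (consp tree)
      (append (tips (car tree)) (tips (cdr tree)))
      (list tree)))

(defun count-tips (tree)
  (if (consp tree)
      (+ (count-tips (car tree)) (count-tips (cdr tree)))
      1))
\end{verbatim}
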
\index{Reynolds, John C.}
\begin{chunk}{axiom.bib}
@inbook{Reyn97,
 author = "Reynolds, John C.",
 title = {{The Essence of Algol}},
 booktitle = "ALGOLlike languages, Volume 1",
 publisher = "Birkhauser Boston Inc.",
 isbn = "0817638806",
 year = "1997",
 abstract =
 "Although Algol 60 has been uniquely in uential in programming
 language design, its descendants have been signicantly dierent than
 their prototype. In this paper, we enumerate the principles that we
 believe embody the essence of Algol, describe a model that satises
 these principles, and illustrate this model with a language that,
 while more uniform and general, retains the character of Algol.",
 paper = "Reyn97.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Sannella, Donald}
\begin{chunk}{axiom.bib}
@article{Sann86,
 author = "Sannella, Donald",
 title = {{Extended ML: An InstitutionIndependent Framework for
 Formal Program Development}},
 booktitle = "Proc. Workshop on Category Theory and Computer Programming",
 journal = "LNCS",
 volume = "240",
 pages = "364389",
 year = "1986",
 paper = "Sann86.pdf"
}

\end{chunk}

\index{Sannella, Donald}
\index{Tarlecki, Andrzej}
\begin{chunk}{axiom.bib}
@article{Sann87,
 author = "Sannella, Donald and Tarlecki, Andrzej",
 title = {{On Observational Equivalence and Algebraic Specification}},
 journal = "J. of Computer and System Sciences",
 volume = "34",
 pages = "150178",
 year = "1987"
 abstract =
 "The properties of a simple and natural notion of observational
 equivalence of algebras and the corresponding specificationbuilding
 operation are studied. We begin with a defmition of observational
 equivalence which is adequate to handle reachable algebras only, and
 show how to extend it to cope with unreachable algebras and also how
 it may be generalised to make sense under an arbitrary institution.
 Behavioural equivalence is treated as an important special case of
 observational equivalence, and its central role in program development
 is shown by means of an example.",
 paper = "Sann87.pdf"
}

\end{chunk}

\index{Sannella, Donald}
\index{Tarlecki, Andrzej}
\begin{chunk}{axiom.bib}
@article{Sann88,
 author = "Sannella, Donald and Tarlecki, Andrzej",
 title = {{Specifications in an Arbitrary Institution}},
 journal = "Information and Computation",
 volume = "76",
 pages = "165210",
 year = "1988",
 abstract =
 "A formalism for constructing and using axiomatic specifications in an
 arbitrary logical system is presented. This builds on the framework
 provided by Goguen and Burstall’s work on the notion of an institution
 as a formalisation of the concept of a logical system for writing
 specifications. We show how to introduce free variables into the
 sentences of an arbitrary institution and how to add quantifiers which
 bind them. We use this foundation to define a set of primitive
 operations for building specifications in an arbitrary institution
 based loosely on those in the ASL kernel specification language. We
 examine the set of operations which results when the definitions are
 instantiated in institutions of total and partial first-order logic
 and compare these with the operations found in existing specification
 languages. We present proof rules which allow proofs to be conducted
 in specifications built using the operations we define. Finally, we
 introduce a simple mechanism for defining and applying parameterised
 specifications and briefly discuss the program development process.",
 paper = "Sann88.pdf"
}

\end{chunk}

\index{Sannella, Donald}
\index{Tarlecki, Andrzej}
\begin{chunk}{axiom.bib}
@article{Sann89,
 author = "Sannella, Donald and Tarlecki, Andrzej",
 title = {{Toward Formal Development of ML Programs: Foundations and
 Methodology (Extended Abstract)}},
 year = "1989",
 abstract =
 "A methodology is presented for the formal development of modular
 Standard ML programs from specifications. Program development proceeds
 via a sequence of design (modular decomposition), coding and
 refinement steps. For each of these three kinds of step, conditions
 are given which ensure the correctness of the result. These conditions
 seem to be as weak as possible under the constraint of being
 expressible as local interface matching requirements.",
 paper = "Sann89.pdf"
}

\end{chunk}

\begin{chunk}{axiom.bib}
@article{SCSCP10,
 author = "Unknown",
 title = {{Symbolic Computation Software Composability Project and
 its implementations}},
 journal = "ACM Comm. in Computer Algebra",
 volume = "44",
 number = "4",
 year = "2010",
 keywords = "printed"
}

\end{chunk}

\index{Damas, Luis}
\index{Milner, Robin}
\begin{chunk}{axiom.bib}
@inproceedings{Dama82,
 author = "Damas, Luis and Milner, Robin",
 title = {{Principal Type-schemes for Functional Programs}},
 booktitle = "Proc POPL '82",
 year = "1982",
 pages = "207212",
 paper = "Dama82.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Mobarakeh, S. Saeidi}
\begin{chunk}{axiom.bib}
@misc{Moba09,
 author = "Mobarakeh, S. Saeidi",
 title = {{Type Inference Algorithms}},
 year = "2009",
 link = "\url{https://www.win.tue.nl/~hzantema/semssm.pdf}",
 abstract =
 "n this paper we are going to describe the Wand’s type inference
 algorithm and we’ll try to extend this algorithm with the notion of
 polymorphic let. By means of a type system, which we’re going to
 extend with some constraint language, we are able to extend the
 algorithm’s first phase (generation of equations) with
 let-polymorphism. The second phase of the algorithm (solving of the
 generated equations) needs some modifications to be done on standard
 unification algorithms because the new generated equation constructs
 can not directly be fed in to the standard unification algorithm. The
 correctness of the first phase of the algorithm has been proved by
 extending Wand’s soundness and completeness proof. Finally a
 simple example is given to clarify the idea behind the algorithm.",
 paper = "Moba09.pdf",
 keywords = "printed"
}

\end{chunk}

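Phase one of the algorithm is a single recursive walk that emits type
equations; a unifier (phase two, such as the one sketched after the
Henglein entry above) then solves them. A minimal sketch (ours) for a
tiny lambda calculus:

\begin{verbatim}
(defvar *tyvar-counter* 0)
(defun fresh-tyvar ()
  (intern (format nil "?T~D" (incf *tyvar-counter*))))

(defun equations (term env type)
  "Equations (LHS . RHS) forcing TERM to have type TYPE. Terms are
variables (symbols, assumed bound in ENV), (LAMBDA X BODY), or
applications (F ARG); types are type variables or (-> A B)."
  (cond ((symbolp term)
         (list (cons (cdr (assoc term env)) type)))
        ((eq (first term) 'lambda)
         (let ((a (fresh-tyvar)) (b (fresh-tyvar)))
           (cons (cons type (list '-> a b))
                 (equations (third term)
                            (acons (second term) a env)
                            b))))
        (t                              ; application (f arg)
         (let ((a (fresh-tyvar)))
           (append (equations (first term) env (list '-> a type))
                   (equations (second term) env a))))))

;; with a fresh counter, (equations '(lambda x x) '() '?t0) yields
;; two equations: ?T0 = (-> ?T1 ?T2) and ?T1 = ?T2
\end{verbatim}
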
\index{Kreitz, Christoph}
\index{Rahli, Vincent}
\begin{chunk}{axiom.bib}
@techreport{Kreix11,
 author = "Kreitz, Christoph and Rahli, Vincent",
 title = {{Introduction to Classic ML}},
 type = "technical report",
 institution = "Cornell University",
 year = "2011",
 paper = "Krei11.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Lee, Daniel K.}
\index{Crary, Karl}
\index{Harper, Robert}
\begin{chunk}{axiom.bib}
@inproceedings{Leex07,
 author = "Lee, Daniel K. and Crary, Karl and Harper, Robert",
 title = {{Towards a Mechanized Metatheory of Standard ML}},
 booktitle = "POPL'07",
 publisher = "ACM",
 year = "2007",
 abstract =
 "We present an internal language with equivalent expressive power to
 Standard ML, and discuss its formalization in LF and the
 machine-checked verification of its type safety in Twelf. The
 internal language is intended to serve as the target of elaboration in
 an elaborative semantics for Standard ML in the style of Harper and
 Stone. Therefore, it includes all the programming mechanisms
 necessary to implement Standard ML, including translucent modules,
 abstraction, polymorphism, higher kinds, references, exceptions,
 recursive types, and recursive functions. Our successful formalization
 of the proof involved a careful interplay between the precise
 formulations of the various mechanisms, and required the invention of
 new representation and proof techniques of general interest.",
 paper = "Leex07.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Tarditi, David}
\index{Morrisett, Greg}
\index{Cheng, Perry}
\index{Stone, Chris}
\index{Harper, Robert}
\index{Lee, Peter}
\begin{chunk}{axiom.bib}
@inproceedings{Tard03,
 author = "Tarditi, David and Morrisett, Greg and Cheng, Perry and
 Stone, Chris and Harper, Robert and Lee, Peter",
 title = {{TIL: A TypeDirected, Optimizing Compiler for ML}},
 booktitle = "20 Years of Prog. Lang. Design and Implementation",
 year = "2003",
 publisher = "ACM",
 isbn = "1581136234",
 paper = "Tard03.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Herman, David}
\begin{chunk}{axiom.bib}
@inproceedings{Herm07,
 author = "Herman, David",
 title = {{Functional Pearl: The Great Escape}},
 booktitle = "ICFP'07",
 publisher = "ACM",
 isbn = "1595938152",
 year = "2007",
 abstract =
 "Filinski showed that callcc and a single mutable reference cell
 are sufficient to express the delimited control operators shift
 and reset. However, this implementation interacts poorly with
 dynamic bindings like exception handlers. We present a variation
 on Filinski's encoding of delimited continuations that behaves
 appropriately in the presence of exceptions and give an
 implementation in Standard ML of New Jersey. We prove the encoding
 correct with respect to the semantics of delimited dynamic binding.",
 paper = "Herm07.pdf"
}

\end{chunk}

\index{Rossberg, Andreas}
\begin{chunk}{axiom.bib}
@misc{Ross13,
 author = "Rossberg, Andreas",
 title = {{HaMLet: To Be Or Not To Be Standard ML}},
 year = "2013",
 link = "\url{https://people.mpisws.org/~rossberg/hamlet}",
 paper = "Ross13.pdf",
 keywords = "printed"
}

\end{chunk}

\index{Kahrs, Stefan}
\index{Sannella, Donald}
\index{Tarlecki, Andrzej}
\begin{chunk}{axiom.bib}
@article{Kahr97,
 author = "Kahrs, Stefan and Sannella, Donald and Tarlecki, Andrzej",
 title = {{The Definition of Extended ML}},
 journal = "Theoretical Computer Science",
 volume = "173",
 pages = "445484",
 year = "1997",
 abstract =
 "This document formally defines the syntax and semantics of the
 Extended ML language. It is based on the published semantics of
 Standard ML in an attempt to ensure compatibility between the two
 languages.",
 paper = "Kahr97.pdf"
}

\end{chunk}

\index{Kahrs, Stefan}
\index{Sannella, Donald}
\begin{chunk}{axiom.bib}
@article{Kahr98,
 author = "Kahrs, Stefan and Sannella, Donald",
 title = {{Reflections on the Design of a Specification Language}},
 journal = "LNCS",
 volume = "1382",
 pages = "154170",
 abstract =
 "We reflect on our experiences from work on the design and
 semantic underpinnings of Extended ML, a specification language
 which supports the specification and formal development of
 Standard ML programs. Our aim is to isolate problems and issues
 that are intrinsic to the general enterprise of designing a
 specification language for use with a given programming language.
 Consequently the lessons learned go far beyond our original aim of
 designing a specification language for ML.",
 paper = "Karh98.pdf",
 keywords = "printed"
}

\end{chunk}
\ No newline at end of file
+add section
diff --git a/src/axiomwebsite/patches.html b/src/axiomwebsite/patches.html
index b003331..1539019 100644
--- a/src/axiomwebsite/patches.html
+++ b/src/axiomwebsite/patches.html
@@ -5972,6 +5972,8 @@ books/bookvolbib add references
src/input/kernel.input add additional Waldek examples
20181211.01.tpd.patch
books/bookvolbib add references
+20181211.02.tpd.patch
+books/bookvol13 add section

1.9.1