diff --git a/doc/sphinx/language/cic.rst b/doc/sphinx/language/cic.rst
new file mode 100644
index 000000000..2695c5eee
--- /dev/null
+++ b/doc/sphinx/language/cic.rst
@@ -0,0 +1,1881 @@
+\chapter[Calculus of Inductive Constructions]{Calculus of Inductive Constructions
+\label{Cic}
+\index{Cic@\textsc{CIC}}
+\index{Calculus of Inductive Constructions}}
+%HEVEA\cutname{cic.html}
+
+The underlying formal language of {\Coq} is a {\em Calculus of
+Inductive Constructions} (\CIC) whose inference rules are presented in
+this chapter. The history of this formalism as well as pointers to related work
+are provided in a separate chapter; see {\em Credits}.
+
+\section[The terms]{The terms\label{Terms}}
+
+The expressions of the {\CIC} are {\em terms} and all terms have a {\em type}.
+There are types for functions (or
+programs), there are atomic types (especially datatypes)... but also
+types for proofs and types for the types themselves.
+In particular, any object handled in the formalism must belong to a
+type. For instance, universal quantification is relative to a type and
+takes the form {\it ``for all x
+of type T, P''}. The expression {\it ``x of type T''} is
+written {\it ``x:T''}. Informally, {\it ``x:T''} can be thought of as
+{\it ``x belongs to T''}.
+
+The types of types are {\em sorts}. Types and sorts are themselves
+terms so that terms, types and sorts are all components of a common
+syntactic language of terms which is described in
+Section~\ref{cic:terms} but, first, we describe sorts.
+
+\subsection[Sorts]{Sorts\label{Sorts}
+\index{Sorts}}
+All sorts have a type and there is an infinite well-founded
+typing hierarchy of sorts whose base sorts are {\Prop} and {\Set}.
+
+The sort {\Prop} is intended to be the type of logical propositions. If
+$M$ is a logical proposition then it denotes the class of terms
+representing proofs of $M$. An object $m$ belonging to $M$ witnesses
+the fact that $M$ is provable. An object of type {\Prop} is called a
+proposition.
+
+The sort {\Set} is intended to be the type of small sets. This includes data
+types such as booleans and naturals, but also products, subsets, and
+function types over these data types.
+
+{\Prop} and {\Set} themselves can be manipulated as ordinary
+terms. Consequently they also have a type. Because assuming simply
+that {\Set} has type {\Set} leads to an inconsistent theory~\cite{Coq86}, the
+language of {\CIC} has infinitely many sorts. There is, in addition
+to {\Set} and {\Prop}, a hierarchy of universes {\Type$(i)$} for any
+integer $i$.
+
+Like {\Set}, all of the sorts {\Type$(i)$} contain small sets such as
+booleans, natural numbers, as well as products, subsets and function
+types over small sets. But, unlike {\Set}, they also contain large
+sets, namely the sorts {\Set} and {\Type$(j)$} for $j<i$, and all
+products, subsets and function types over these sorts.
+
+Formally, we call {\Sort} the set of sorts which is defined by:
+\index{Type@{\Type}}%
+\index{Prop@{\Prop}}%
+\index{Set@{\Set}}%
+\[\Sort \equiv \{\Prop,\Set,\Type(i)\;|\; i \in \NN\} \]
+Their properties, such as:
+{\Prop:\Type$(1)$}, {\Set:\Type$(1)$}, and {\Type$(i)$:\Type$(i+1)$},
+are defined in Section~\ref{subtyping-rules}.
+
+The user does not have to explicitly mention the index $i$ when referring to
+the universe \Type$(i)$. One only writes \Type. The
+system itself generates a new index for each instance of \Type\ and
+checks that the constraints between these
+indexes can be solved. From the user's point of view we consequently
+have {\Type}:{\Type}.
+We shall make the constraints between the
+indexes precise in the typing rules.
+
+\paragraph{Implementation issues}
+In practice, the {\Type} hierarchy is implemented using
+{\em algebraic universes}\index{algebraic universe}.
+An algebraic universe $u$ is either a variable (a qualified
+identifier with a number) or a successor of an algebraic universe (an
+expression $u+1$), or an upper bound of algebraic universes (an
+expression $max(u_1,...,u_n)$), or the base universe (the expression
+$0$) which corresponds, in the arity of template polymorphic inductive
+types (see Section \ref{Template-polymorphism}),
+to the predicative sort {\Set}. A graph of constraints between
+the universe variables is maintained globally. To ensure the existence
+of a mapping of the universes to the positive integers, the graph of
+constraints must remain acyclic. Typing expressions that violate the
+acyclicity of the graph of constraints results in a \errindex{Universe
+inconsistency} error (see also Section~\ref{PrintingUniverses}).
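+
+For instance, the following sketch (the names \texttt{U} and \texttt{u} are
+arbitrary and serve only as an illustration) forces a cyclic constraint
+between universe levels and is therefore rejected with such an error:
+\begin{coq_example}
+(* U and u are throwaway names for this illustration only *)
+Definition U := Type.
+Fail Definition u : U := U.
+\end{coq_example}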
+
+%% HH: This looks to me more like source of confusion than helpful
+
+%% \subsection{Constants}
+
+%% Constants refers to
+%% objects in the global environment. These constants may denote previously
+%% defined objects, but also objects related to inductive definitions
+%% (either the type itself or one of its constructors or destructors).
+
+%% \medskip\noindent {\bf Remark. } In other presentations of \CIC,
+%% the inductive objects are not seen as
+%% external declarations but as first-class terms. Usually the
+%% definitions are also completely ignored. This is a nice theoretical
+%% point of view but not so practical. An inductive definition is
+%% specified by a possibly huge set of declarations, clearly we want to
+%% share this specification among the various inductive objects and not
+%% to duplicate it. So the specification should exist somewhere and the
+%% various objects should refer to it. We choose one more level of
+%% indirection where the objects are just represented as constants and
+%% the environment gives the information on the kind of object the
+%% constant refers to.
+
+%% \medskip
+%% Our inductive objects will be manipulated as constants declared in the
+%% environment. This roughly corresponds to the way they are actually
+%% implemented in the \Coq\ system. It is simple to map this presentation
+%% in a theory where inductive objects are represented by terms.
+
+\subsection{Terms}
+\label{cic:terms}
+
+Terms are built from sorts, variables, constants,
+%constructors, inductive types,
+abstractions, applications, local definitions,
+%case analysis, fixpoints, cofixpoints
+and products.
+From a syntactic point of view, types cannot be distinguished from terms,
+except that they cannot start with an abstraction or a constructor.
+More precisely, the language of the {\em Calculus of Inductive
+ Constructions} is built from the following rules.
+%
+\begin{enumerate}
+\item the sorts {\Set}, {\Prop}, ${\Type(i)}$ are terms.
+\item variables, hereafter ranged over by letters $x$, $y$, etc., are terms.
+\item constants, hereafter ranged over by letters $c$, $d$, etc., are terms.
+%\item constructors, hereafter ranged over by letter $C$, are terms.
+%\item inductive types, hereafter ranged over by letter $I$, are terms.
+\item\index{products} if $x$ is a variable and $T$, $U$ are terms then $\forall~x:T,U$
+ ($\kw{forall}~x:T,~U$ in \Coq{} concrete syntax) is a term. If $x$
+ occurs in $U$, $\forall~x:T,U$ reads as {\it ``for all x of type T,
+ U''}. As $U$ depends on $x$, one says that $\forall~x:T,U$ is a
+ {\em dependent product}. If $x$ does not occur in $U$ then
+ $\forall~x:T,U$ reads as {\it ``if T then U''}. A {\em non dependent
+ product} can be written: $T \ra U$.
+\item if $x$ is a variable and $T$, $u$ are terms then $\lb x:T \mto u$
+ ($\kw{fun}~x:T~ {\tt =>}~ u$ in \Coq{} concrete syntax) is a term. This is a
+ notation for the $\lambda$-abstraction of
+ $\lambda$-calculus\index{lambda-calculus@$\lambda$-calculus}
+ \cite{Bar81}. The term $\lb x:T \mto u$ is a function which maps
+ elements of $T$ to the expression $u$.
+\item if $t$ and $u$ are terms then $(t\ u)$ is a term
+ ($t~u$ in \Coq{} concrete syntax). The term $(t\
+ u)$ reads as {\it ``t applied to u''}.
+\item if $x$ is a variable, and $t$, $T$ and $u$ are terms then
+ $\kw{let}~x:=t:T~\kw{in}~u$ is a
+ term which denotes the term $u$ where the variable $x$ is locally
+ bound to $t$ of type $T$. This stands for the common ``let-in''
+ construction of functional programs such as ML or Scheme.
+%\item case ...
+%\item fixpoint ...
+%\item cofixpoint ...
+\end{enumerate}
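+
+The following sketch illustrates the {\Coq} concrete syntax of the last four
+constructions, using the standard type \texttt{nat} and its arithmetic
+operations (introduced only later in this chapter) purely as examples:
+\begin{coq_example}
+Check (forall x : nat, x = x).      (* a dependent product *)
+Check (fun x : nat => x + x).       (* an abstraction *)
+Check ((fun x : nat => x) 0).       (* an application *)
+Check (let y : nat := 2 in y * y).  (* a local definition *)
+\end{coq_example}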
+
+\paragraph{Free variables.}
+The notion of free variables is defined as usual. In the expressions
+$\lb x:T\mto U$ and $\forall x:T, U$ the occurrences of $x$ in $U$
+are bound.
+
+\paragraph[Substitution.]{Substitution.\index{Substitution}}
+The notion of substituting a term $t$ for the free occurrences of a
+variable $x$ in a term $u$ is defined as usual. The resulting term
+is written $\subst{u}{x}{t}$.
+
+\paragraph[The logical vs programming readings.]{The logical vs programming readings.}
+
+The constructions of the {\CIC} can be used to express both logical
+and programming notions, according to the Curry-Howard
+correspondence between proofs and programs, and between propositions
+and types~\cite{Cur58,How80,Bru72}.
+
+For instance, let us assume that \nat\ is the type of natural numbers
+with zero element written $0$ and that ${\tt True}$ is the always true
+proposition. Then $\ra$ is used to denote $\nat\ra\nat$, which is
+the type of functions from \nat\ to \nat, to denote ${\tt True}\ra{\tt
+  True}$, which is an implicative proposition, and to denote $\nat \ra
+\Prop$, which is the type of unary predicates over the natural numbers,
+etc.
+
+Let us assume that ${\tt mult}$ is a function of type $\nat\ra\nat\ra
+\nat$ and ${\tt eqnat}$ a predicate of type $\nat\ra\nat\ra \Prop$.
+The $\lambda$-abstraction can serve to build ``ordinary'' functions as
+in $\lambda x:\nat.({\tt mult}~x~x)$ (i.e. $\kw{fun}~x:\nat ~{\tt =>}~
+{\tt mult} ~x~x$ in {\Coq} notation) but may also build predicates
+over the natural numbers. For instance $\lambda x:\nat.({\tt eqnat}~
+x~0)$ (i.e. $\kw{fun}~x:\nat ~{\tt =>}~ {\tt eqnat}~ x~0$ in {\Coq}
+notation) will represent the predicate of one variable $x$ which
+asserts the equality of $x$ with $0$. This predicate has type $\nat
+\ra \Prop$ and it can be applied to any expression of type ${\nat}$,
+say $t$, to give an object $P~t$ of type \Prop, namely a proposition.
+
+Furthermore, $\kw{forall}~x:\nat,\,P\;x$ will represent the type of
+functions which associate to each natural number $n$ an object of type
+$(P~n)$ and consequently represent the type of proofs of the formula
+``$\forall x.\,P(x)$''.
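+
+For illustration, here is a sketch in {\Coq} syntax; it uses the standard
+multiplication and equality on \nat\ in place of the ${\tt mult}$ and
+${\tt eqnat}$ assumed above:
+\begin{coq_example}
+Check (fun x : nat => x * x).    (* an ordinary function of type nat -> nat *)
+Check (fun x : nat => x = 0).    (* a predicate of type nat -> Prop *)
+Check (forall x : nat, x = 0).   (* a proposition, of type Prop *)
+\end{coq_example}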
+
+\section[Typing rules]{Typing rules\label{Typed-terms}}
+
+As objects of type theory, terms are subject to a {\em type
+discipline}. Whether a term is well typed depends on
+a global environment and a local context.
+
+\paragraph{Local context.\index{Local context}}
+A {\em local context} is an ordered list of
+{\em local declarations\index{declaration!local}} of names which we call {\em variables\index{variable}}.
+The declaration of some variable $x$ is
+either a {\em local assumption\index{assumption!local}}, written $x:T$ ($T$ is a type) or a {\em local definition\index{definition!local}},
+written $x:=t:T$. We use brackets to write local contexts. A
+typical example is $[x:T;y:=u:U;z:V]$. Notice that the variables
+declared in a local context must be distinct. If $\Gamma$ declares some $x$,
+we write $x \in \Gamma$. By writing $(x:T) \in \Gamma$ we mean that
+either $x:T$ is an assumption in $\Gamma$ or that there exists some $t$ such
+that $x:=t:T$ is a definition in $\Gamma$. If $\Gamma$ defines some
+$x:=t:T$, we also write $(x:=t:T) \in \Gamma$.
+For the rest of the chapter, $\Gamma::(y:T)$ denotes the local context
+$\Gamma$ enriched with the local assumption $y:T$.
+Similarly, $\Gamma::(y:=t:T)$ denotes the local context
+$\Gamma$ enriched with the local definition $(y:=t:T)$.
+The notation $[]$ denotes the empty local context.
+By $\Gamma_1; \Gamma_2$ we mean concatenation of the local context $\Gamma_1$
+and the local context $\Gamma_2$.
+
+% Does not seem to be used further...
+% Si dans l'explication WF(E)[Gamma] concernant les constantes
+% definies ds un contexte
+
+%We define the inclusion of two local contexts $\Gamma$ and $\Delta$ (written
+%as $\Gamma \subset \Delta$) as the property, for all variable $x$,
+%type $T$ and term $t$, if $(x:T) \in \Gamma$ then $(x:T) \in \Delta$
+%and if $(x:=t:T) \in \Gamma$ then $(x:=t:T) \in \Delta$.
+%We write
+% $|\Delta|$ for the length of the context $\Delta$, that is for the number
+% of declarations (assumptions or definitions) in $\Delta$.
+
+\paragraph[Global environment.]{Global environment.\index{Global environment}}
+%Because we are manipulating global declarations (global constants and global
+%assumptions), we also need to consider a global environment $E$.
+
+A {\em global environment} is an ordered list of {\em global declarations\index{declaration!global}}.
+Global declarations are {\em global assumptions\index{assumption!global}}, {\em global
+definitions\index{definition!global}}, or declarations of inductive objects. Inductive objects themselves declare inductive or coinductive types together with their constructors
+(see Section~\ref{Cic-inductive-definitions}).
+
+A {\em global assumption} will be represented in the global environment as
+$(c:T)$ which assumes the name $c$ to be of some type $T$.
+A {\em global definition} will
+be represented in the global environment as $c:=t:T$ which defines
+the name $c$ to have value $t$ and type $T$.
+We shall call such names {\em constants}.
+For the rest of the chapter, $E;c:T$ denotes the global environment
+$E$ enriched with the global assumption $c:T$.
+Similarly, $E;c:=t:T$ denotes the global environment
+$E$ enriched with the global definition $(c:=t:T)$.
+
+The rules for inductive definitions (see Section
+\ref{Cic-inductive-definitions}) have to be considered as assumption
+rules to which the following definitions apply: if the name $c$ is
+declared in $E$, we write $c \in E$ and if $c:T$ or $c:=t:T$ is
+declared in $E$, we write $(c : T) \in E$.
+
+\paragraph[Typing rules.]{Typing rules.\label{Typing-rules}\index{Typing rules}}
+In the following, we define simultaneously two
+judgments. The first one \WTEG{t}{T} means the term $t$ is well-typed
+and has type $T$ in the global environment $E$ and local context $\Gamma$. The
+second judgment \WFE{\Gamma} means that the global environment $E$ is
+well-formed and the local context $\Gamma$ is a valid local context in this
+global environment.
+% HH: This looks to me complicated. I think it would be better to talk
+% about ``discharge'' as a transformation of global environments,
+% rather than as keeping a local context next to global constants.
+%
+%% It also means a third property which makes sure that any
+%%constant in $E$ was defined in an environment which is included in
+%%$\Gamma$
+%%\footnote{This requirement could be relaxed if we instead introduced
+%% an explicit mechanism for instantiating constants. At the external
+%% level, the Coq engine works accordingly to this view that all the
+%% definitions in the environment were built in a local sub-context of the
+%% current local context.}.
+
+A term $t$ is well typed in a global environment $E$ iff there exists a
+local context $\Gamma$ and a term $T$ such that the judgment \WTEG{t}{T} can
+be derived from the following rules.
+\begin{description}
+\item[W-Empty] \inference{\WF{[]}{}}
+\item[W-Local-Assum] % Ce n'est pas vrai : x peut apparaitre plusieurs fois dans Gamma
+\inference{\frac{\WTEG{T}{s}~~~~s \in \Sort~~~~x \not\in \Gamma % \cup E
+ }{\WFE{\Gamma::(x:T)}}}
+\item[W-Local-Def]
+\inference{\frac{\WTEG{t}{T}~~~~x \not\in \Gamma % \cup E
+ }{\WFE{\Gamma::(x:=t:T)}}}
+\item[W-Global-Assum] \inference{\frac{\WTE{}{T}{s}~~~~s \in \Sort~~~~c \notin E}
+ {\WF{E;c:T}{}}}
+\item[W-Global-Def] \inference{\frac{\WTE{}{t}{T}~~~c \notin E}
+ {\WF{E;c:=t:T}{}}}
+\item[Ax-Prop] \index{Typing rules!Ax-Prop}
+\inference{\frac{\WFE{\Gamma}}{\WTEG{\Prop}{\Type(1)}}}
+\item[Ax-Set] \index{Typing rules!Ax-Set}
+\inference{\frac{\WFE{\Gamma}}{\WTEG{\Set}{\Type(1)}}}
+\item[Ax-Type] \index{Typing rules!Ax-Type}
+\inference{\frac{\WFE{\Gamma}}{\WTEG{\Type(i)}{\Type(i+1)}}}
+\item[Var]\index{Typing rules!Var}
+ \inference{\frac{ \WFE{\Gamma}~~~~~(x:T) \in \Gamma~~\mbox{or}~~(x:=t:T) \in \Gamma~\mbox{for some $t$}}{\WTEG{x}{T}}}
+\item[Const] \index{Typing rules!Const}
+\inference{\frac{\WFE{\Gamma}~~~~(c:T) \in E~~\mbox{or}~~(c:=t:T) \in E~\mbox{for some $t$} }{\WTEG{c}{T}}}
+\item[Prod-Prop] \index{Typing rules!Prod-Prop}
+\inference{\frac{\WTEG{T}{s}~~~~s \in \Sort~~~
+ \WTE{\Gamma::(x:T)}{U}{\Prop}}
+ { \WTEG{\forall~x:T,U}{\Prop}}}
+\item[Prod-Set] \index{Typing rules!Prod-Set}
+\inference{\frac{\WTEG{T}{s}~~~~s \in\{\Prop, \Set\}~~~~~~
+ \WTE{\Gamma::(x:T)}{U}{\Set}}
+ { \WTEG{\forall~x:T,U}{\Set}}}
+\item[Prod-Type] \index{Typing rules!Prod-Type}
+\inference{\frac{\WTEG{T}{\Type(i)}~~~~
+ \WTE{\Gamma::(x:T)}{U}{\Type(i)}}
+ {\WTEG{\forall~x:T,U}{\Type(i)}}}
+\item[Lam]\index{Typing rules!Lam}
+\inference{\frac{\WTEG{\forall~x:T,U}{s}~~~~ \WTE{\Gamma::(x:T)}{t}{U}}
+ {\WTEG{\lb x:T\mto t}{\forall x:T, U}}}
+\item[App]\index{Typing rules!App}
+ \inference{\frac{\WTEG{t}{\forall~x:U,T}~~~~\WTEG{u}{U}}
+ {\WTEG{(t\ u)}{\subst{T}{x}{u}}}}
+\item[Let]\index{Typing rules!Let}
+\inference{\frac{\WTEG{t}{T}~~~~ \WTE{\Gamma::(x:=t:T)}{u}{U}}
+ {\WTEG{\letin{x}{t:T}{u}}{\subst{U}{x}{t}}}}
+\end{description}
+
+\Rem The {\bf Prod-Prop} and {\bf Prod-Set} typing rules make sense if we consider the semantic
+difference between {\Prop} and {\Set}:
+\begin{itemize}
+ \item All values of a type that has a sort {\Set} are extractable.
+ \item No values of a type that has a sort {\Prop} are extractable.
+\end{itemize}
+
+\Rem We may have $\kw{let}~x:=t:T~\kw{in}~u$
+well-typed without having $((\lb x:T\mto u)~t)$ well-typed (where
+$T$ is a type of $t$). This is because the value $t$ associated to $x$
+may be used in a conversion rule (see Section~\ref{conv-rules}).
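+
+A sketch of this phenomenon, instantiating $T$ with the sort \Set: the
+local definition is accepted while the corresponding $\beta$-redex is not,
+because only the former makes the value \texttt{nat} available to conversion.
+\begin{coq_example}
+Check (let T := nat in (0 : T)).
+Fail Check ((fun T : Set => (0 : T)) nat).
+\end{coq_example}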
+
+\section[Conversion rules]{Conversion rules\index{Conversion rules}
+\label{conv-rules}}
+
+In \CIC, there is an internal reduction mechanism. In particular, it
+can decide if two programs are {\em intensionally} equal (one
+says {\em convertible}). Convertibility is described in this section.
+
+\paragraph[$\beta$-reduction.]{$\beta$-reduction.\label{beta}\index{beta-reduction@$\beta$-reduction}}
+
+We want to be able to identify some terms, for instance to identify the
+application of a function to a given argument with its result. For
+instance the identity function over a given type $T$ can be written
+$\lb x:T\mto x$. In any global environment $E$ and local context $\Gamma$, we want to identify any object $a$ (of type $T$) with the
+application $((\lb x:T\mto x)~a)$. We define for this a {\em reduction} (or a
+{\em conversion}) rule we call $\beta$:
+\[ \WTEGRED{((\lb x:T\mto
+ t)~u)}{\triangleright_{\beta}}{\subst{t}{x}{u}} \]
+We say that $\subst{t}{x}{u}$ is the {\em $\beta$-contraction} of
+$((\lb x:T\mto t)~u)$ and, conversely, that $((\lb x:T\mto t)~u)$
+is the {\em $\beta$-expansion} of $\subst{t}{x}{u}$.
+
+With respect to $\beta$-reduction, terms of the {\em Calculus of
+ Inductive Constructions} enjoy some fundamental properties such as
+confluence, strong normalization, subject reduction. These results are
+theoretically of great importance but we will not detail them here and
+refer the interested reader to \cite{Coq85}.
+
+\paragraph[$\iota$-reduction.]{$\iota$-reduction.\label{iota}\index{iota-reduction@$\iota$-reduction}}
+A specific conversion rule is associated with the inductive objects in
+the global environment. We shall give the precise rules later on (see Section~\ref{iotared});
+they just say that a destructor applied to an object
+built from a constructor behaves as expected. This reduction is
+called $\iota$-reduction and is more precisely studied in
+\cite{Moh93,Wer94}.
+
+
+\paragraph[$\delta$-reduction.]{$\delta$-reduction.\label{delta}\index{delta-reduction@$\delta$-reduction}}
+
+We may have variables defined in local contexts or constants defined in the global
+environment. It is legal to identify such a reference with its value,
+that is to expand (or unfold) it into its value. This
+reduction is called $\delta$-reduction and is defined as follows.
+
+$$\WTEGRED{x}{\triangleright_{\delta}}{t}~~~~~\mbox{if $(x:=t:T) \in \Gamma$}~~~~~~~~~\WTEGRED{c}{\triangleright_{\delta}}{t}~~~~~\mbox{if $(c:=t:T) \in E$}$$
+
+
+\paragraph[$\zeta$-reduction.]{$\zeta$-reduction.\label{zeta}\index{zeta-reduction@$\zeta$-reduction}}
+
+{\Coq} also allows removal of local definitions occurring in terms by
+replacing the defined variable by its value. Since the declaration is
+removed, this reduction differs from $\delta$-reduction. It is
+called $\zeta$-reduction and is defined as follows.
+
+$$\WTEGRED{\kw{let}~x:=u~\kw{in}~t}{\triangleright_{\zeta}}{\subst{t}{x}{u}}$$
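+
+The reductions above can be observed with \texttt{Eval} and explicit
+reduction flags; the constant \texttt{double} is introduced only for the
+sake of the example:
+\begin{coq_example}
+Definition double (n : nat) := n + n.  (* auxiliary constant for the example *)
+Eval cbv beta in ((fun x : nat => x + x) 2).
+Eval cbv delta [double] in (double 2).
+Eval cbv zeta in (let x := 2 in x + x).
+Eval cbv iota in (match 2 with 0 => true | S _ => false end).
+\end{coq_example}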
+
+\paragraph{$\eta$-expansion.%
+\label{eta}%
+\index{eta-expansion@$\eta$-expansion}%
+%\index{eta-reduction@$\eta$-reduction}
+}%
+Another important concept is $\eta$-expansion. It is legal to identify any
+term $t$ of functional type $\forall x:T, U$ with its so-called
+$\eta$-expansion $\lb x:T\mto (t\ x)$ for $x$ an arbitrary variable
+name fresh in $t$.
+
+\Rem We deliberately do not define $\eta$-reduction:
+\begin{latexonly}%
+ $$\lb x:T\mto (t\ x)\not\triangleright_\eta\hskip.3em t$$
+\end{latexonly}%
+\begin{htmlonly}
+ $$\lb x:T\mto (t\ x)~\not\triangleright_\eta~t$$
+\end{htmlonly}
+This is because, in general, the type of $t$ need not be convertible to the type of $\lb x:T\mto (t\ x)$.
+E.g., if we take $f$ such that:
+\begin{latexonly}%
+ $$f\hskip.5em:\hskip.5em\forall x:Type(2),Type(1)$$
+\end{latexonly}%
+\begin{htmlonly}
+ $$f~:~\forall x:Type(2),Type(1)$$
+\end{htmlonly}
+then
+\begin{latexonly}%
+ $$\lb x:Type(1),(f\, x)\hskip.5em:\hskip.5em\forall x:Type(1),Type(1)$$
+\end{latexonly}%
+\begin{htmlonly}
+ $$\lb x:Type(1),(f\, x)~:~\forall x:Type(1),Type(1)$$
+\end{htmlonly}
+We could not allow
+\begin{latexonly}%
+  $$\lb x:Type(1),(f\,x)\hskip.4em\triangleright_\eta\hskip.6em f$$
+\end{latexonly}%
+\begin{htmlonly}
+  $$\lb x:Type(1),(f\,x)~\triangleright_\eta~f$$
+\end{htmlonly}
+because the type of the reduced term $\forall x:Type(2),Type(1)$
+would not be convertible to the type of the original term $\forall x:Type(1),Type(1)$.
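+
+Nevertheless, the conversion test described below does identify a term with
+its $\eta$-expansion when the types agree, as in this sketch:
+\begin{coq_example}
+Check (eq_refl : S = (fun n : nat => S n)).
+\end{coq_example}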
+
+\paragraph[Convertibility.]{Convertibility.\label{convertibility}
+\index{beta-reduction@$\beta$-reduction}\index{iota-reduction@$\iota$-reduction}\index{delta-reduction@$\delta$-reduction}\index{zeta-reduction@$\zeta$-reduction}}
+
+Let us write $\WTEGRED{t}{\triangleright}{u}$ for the contextual closure of the relation $t$ reduces to $u$ in the global environment $E$ and local context $\Gamma$ with one of the previous reductions $\beta$, $\iota$, $\delta$ or $\zeta$.
+
+We say that two terms $t_1$ and $t_2$ are {\em
+ $\beta\iota\delta\zeta\eta$-convertible}, or simply {\em
+ convertible}, or {\em equivalent}, in the global environment $E$ and
+local context $\Gamma$ iff there exist terms $u_1$ and $u_2$ such that
+$\WTEGRED{t_1}{\triangleright \ldots \triangleright}{u_1}$ and
+$\WTEGRED{t_2}{\triangleright \ldots \triangleright}{u_2}$ and either
+$u_1$ and $u_2$ are identical, or they are convertible up to
+$\eta$-expansion, i.e. $u_1$ is $\lb x:T\mto u'_1$ and $u_2\,x$ is
+recursively convertible to $u'_1$, or, symmetrically, $u_2$ is $\lb
+x:T\mto u'_2$ and $u_1\,x$ is recursively convertible to $u'_2$. We
+then write $\WTEGCONV{t_1}{t_2}$.
+
+Apart from this we consider two instances of polymorphic and cumulative (see Chapter~\ref{Universes-full}) inductive types (see below)
+convertible $\WTEGCONV{t\ w_1 \dots w_m}{t\ w_1' \dots w_m'}$ if we have subtypings (see below) in both directions, i.e.,
+$\WTEGLECONV{t\ w_1 \dots w_m}{t\ w_1' \dots w_m'}$ and $\WTEGLECONV{t\ w_1' \dots w_m'}{t\ w_1 \dots w_m}$.
+Furthermore, we consider $\WTEGCONV{c\ v_1 \dots v_m}{c'\ v_1' \dots v_m'}$ convertible if $\WTEGCONV{v_i}{v_i'}$ for $1 \le i \le m$,
+and we have that $c$ and $c'$ are the same constructor of different instances of the same inductive type (differing only in universe levels)
+such that $\WTEG{c\ v_1 \dots v_m}{t\ w_1 \dots w_m}$ and $\WTEG{c'\ v_1' \dots v_m'}{t'\ w_1' \dots w_m'}$ and we have $\WTEGCONV{t\ w_1 \dots w_m}{t'\ w_1' \dots w_m'}$.
+
+The convertibility relation allows introducing a new typing rule
+which says that two convertible well-formed types have the same
+inhabitants.
+
+\section[Subtyping rules]{Subtyping rules\index{Subtyping rules}
+\label{subtyping-rules}}
+
+So far, we have not taken into account one rule between universes
+which says that any term in a universe of index $i$ is also a term in
+the universe of index $i+1$ (this is the {\em cumulativity} rule of
+{\CIC}). This property extends the equivalence relation of
+convertibility into a {\em subtyping} relation inductively defined by:
+\begin{enumerate}
+\item if $\WTEGCONV{t}{u}$ then $\WTEGLECONV{t}{u}$,
+\item if $i \leq j$ then $\WTEGLECONV{\Type(i)}{\Type(j)}$,
+\item for any $i$, $\WTEGLECONV{\Set}{\Type(i)}$,
+\item $\WTEGLECONV{\Prop}{\Set}$, hence, by transitivity,
+ $\WTEGLECONV{\Prop}{\Type(i)}$, for any $i$
+\item if $\WTEGCONV{T}{U}$ and $\WTELECONV{\Gamma::(x:T)}{T'}{U'}$ then $\WTEGLECONV{\forall~x:T, T'}{\forall~x:U, U'}$.
+\item if $\Ind{}{p}{\Gamma_I}{\Gamma_C}$ is a universe polymorphic and cumulative (see Chapter~\ref{Universes-full})
+ inductive type (see below) and $(t : \forall\Gamma_P,\forall\Gamma_{\mathit{Arr}(t)}, \Sort)\in\Gamma_I$
+ and $(t' : \forall\Gamma_P',\forall\Gamma_{\mathit{Arr}(t)}', \Sort')\in\Gamma_I$
+ are two different instances of \emph{the same} inductive type (differing only in universe levels) with constructors
+  \[[c_1: \forall\Gamma_P,\forall T_{1,1} \dots T_{1,n_1},t\ v_{1,1} \dots v_{1,m}; \dots; c_k: \forall\Gamma_P,\forall T_{k, 1} \dots T_{k,n_k},t\ v_{k,1}\dots v_{k,m}]\]
+ and
+  \[[c_1: \forall\Gamma_P',\forall T_{1,1}' \dots T_{1,n_1}',t'\ v_{1,1}' \dots v_{1,m}'; \dots; c_k: \forall\Gamma_P',\forall T_{k, 1}' \dots T_{k,n_k}',t'\ v_{k,1}'\dots v_{k,m}']\]
+  respectively, then $\WTEGLECONV{t\ w_1 \dots w_m}{t'\ w_1' \dots w_m'}$ (notice that $t$ and $t'$ are both fully applied, i.e., they have a sort as a type)
+ if $\WTEGCONV{w_i}{w_i'}$ for $1 \le i \le m$ and we have
+ \[ \WTEGLECONV{T_{i,j}}{T_{i,j}'} \text{ and } \WTEGLECONV{A_i}{A_i'}\]
+  where $\Gamma_{\mathit{Arr}(t)} = [a_1 : A_1; \dots; a_l : A_l]$ and $\Gamma_{\mathit{Arr}(t')} = [a_1 : A_1'; \dots; a_l : A_l']$.
+\end{enumerate}
+
+The conversion rule up to subtyping is now exactly:
+
+\begin{description}\label{Conv}
+\item[Conv]\index{Typing rules!Conv}
+ \inference{
+ \frac{\WTEG{U}{s}~~~~\WTEG{t}{T}~~~~\WTEGLECONV{T}{U}}{\WTEG{t}{U}}}
+ \end{description}
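+
+A sketch of cumulativity combined with the {\bf Conv} rule: a type of sort
+{\Set} (resp. a proposition of sort {\Prop}) is also accepted at type \Type.
+\begin{coq_example}
+Check (nat : Set).
+Check (nat : Type).
+Check (True : Prop).
+Check (True : Type).
+\end{coq_example}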
+
+
+\paragraph[Normal form.]{Normal form.\index{Normal form}\label{Normal-form}\label{Head-normal-form}\index{Head normal form}}
+A term which cannot be reduced any further is said to be in {\em normal
+ form}. There are several ways (or strategies) to apply the reduction
+rules. Among them, we have to mention the {\em head reduction} which
+will play an important role (see Chapter~\ref{Tactics}). Any term can
+be written as $\lb x_1:T_1\mto \ldots \lb x_k:T_k \mto
+(t_0\ t_1\ldots t_n)$ where
+$t_0$ is not an application. We then say that $t_0$ is the {\em head
+ of $t$}. If we assume that $t_0$ is $\lb x:T\mto u_0$ then one step of
+$\beta$-head reduction of $t$ is:
+\[\lb x_1:T_1\mto \ldots \lb x_k:T_k\mto (\lb x:T\mto u_0\ t_1\ldots t_n)
+~\triangleright ~ \lb (x_1:T_1)\ldots(x_k:T_k)\mto
+(\subst{u_0}{x}{t_1}\ t_2 \ldots t_n)\]
+Iterating the process of head reduction until the head of the reduced
+term is no longer an abstraction leads to the {\em $\beta$-head normal
+ form} of $t$:
+\[ t \triangleright \ldots \triangleright
+\lb x_1:T_1\mto \ldots\lb x_k:T_k\mto (v\ u_1
+\ldots u_m)\]
+where $v$ is not an abstraction (nor an application). Note that the
+head normal form must not be confused with the normal form since some
+$u_i$ can be reducible.
+%
+Similar notions of head-normal forms involving $\delta$, $\iota$ and $\zeta$
+reductions or any combination of those can also be defined.
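+
+As an illustration (a sketch; the displayed output may vary with the
+implementation), \texttt{Eval hnf} stops once the head is no longer a redex
+and leaves the arguments untouched, while a full reduction such as
+\texttt{cbv} also reduces them:
+\begin{coq_example}
+Eval hnf in ((fun f : nat -> nat => f) S (2 + 2)).
+Eval cbv in ((fun f : nat -> nat => f) S (2 + 2)).
+\end{coq_example}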
+
+\section[Inductive definitions]{Inductive Definitions\label{Cic-inductive-definitions}}
+
+% Here we assume that the reader knows what is an inductive definition.
+
+Formally, we can represent any {\em inductive definition\index{definition!inductive}} as \Ind{}{p}{\Gamma_I}{\Gamma_C} where:
+\begin{itemize}
+ \item $\Gamma_I$ determines the names and types of inductive types;
+ \item $\Gamma_C$ determines the names and types of constructors of these inductive types;
+ \item $p$ determines the number of parameters of these inductive types.
+\end{itemize}
+These inductive definitions, together with global assumptions and global definitions, then form the global environment.
+%
+Additionally, for any $p$ there always exists $\Gamma_P=[a_1:A_1;\dots;a_p:A_p]$
+such that each $T$ in $(t:T)\in\Gamma_I\cup\Gamma_C$ can be written as:
+$\forall\Gamma_P, T^\prime$ where $\Gamma_P$ is called the {\em context of parameters\index{context of parameters}}.
+Furthermore, we must have that each $T$ in $(t:T)\in\Gamma_I$ can be written as:
+$\forall\Gamma_P,\forall\Gamma_{\mathit{Arr}(t)}, \Sort$ where $\Gamma_{\mathit{Arr}(t)}$ is called the
+{\em Arity} of the inductive type\index{arity of inductive type} $t$ and
+$\Sort$ is called the sort of the inductive type $t$.
+
+\paragraph{Examples}
+
+ \newcommand\ind[3]{$\mathsf{Ind}~[#1]\left(\hskip-.4em
+ \begin{array}{r@{\mathrm{~:=~}}l}
+ #2 & #3 \\
+ \end{array}
+ \hskip-.4em
+ \right)$}
+ \def\colon{@{\hskip.5em:\hskip.5em}}
+
+The declaration for parameterized lists is:
+\begin{latexonly}
+ \vskip.5em
+
+ \ind{1}{[\List:\Set\ra\Set]}{\left[\begin{array}{r@{:}l}
+ \Nil & \forall A:\Set,\List~A \\
+ \cons & \forall A:\Set, A \ra \List~A \ra \List~A
+ \end{array}
+ \right]}
+ \vskip.5em
+\end{latexonly}
+\begin{rawhtml}<pre><table style="border-spacing:0">
+ <tr style="vertical-align:middle">
+ <td style="width:10pt;text-align:center;font-family:sans-serif;font-style:italic">Ind</td>
+ <td style="width:20pt;text-align:center">[1]</td>
+ <td style="width:5pt;text-align:center">⎛<br>⎝</td>
+ <td style="width:120pt;text-align:center">[ <span style="font-family:monospace">list : Set → Set</span> ]</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:=</td>
+ <td style="width:10pt;text-align:center">⎡<br>⎣</td>
+ <td>
+ <table style="border-spacing:0">
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">nil</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:=</td>
+ <td style="text-align:left;font-family:monospace">∀A : Set, list A</td>
+ </tr>
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">cons</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:=</td>
+ <td style="text-align:left;font-family:monospace">∀A : Set, A → list A → list A</td>
+ </tr>
+ </table>
+ </td>
+ <td style="width:10pt;text-align:center">⎤<br>⎦</td>
+ <td style="width:5pt;text-align:center">⎞<br>⎠</td>
+ </tr>
+</table></pre>
+\end{rawhtml}
+\noindent which corresponds to the result of the \Coq\ declaration:
+\begin{coq_example*}
+Inductive list (A:Set) : Set :=
+ | nil : list A
+ | cons : A -> list A -> list A.
+\end{coq_example*}
+
+\noindent The declaration for a mutual inductive definition of {\tree} and {\forest} is:
+\begin{latexonly}
+ \vskip.5em
+\ind{~}{\left[\begin{array}{r@{:}l}\tree&\Set\\\forest&\Set\end{array}\right]}
+ {\left[\begin{array}{r@{:}l}
+ \node & \forest \ra \tree\\
+ \emptyf & \forest\\
+ \consf & \tree \ra \forest \ra \forest\\
+ \end{array}\right]}
+ \vskip.5em
+\end{latexonly}
+\begin{rawhtml}<pre><table style="border-spacing:0">
+ <tr style="vertical-align:middle">
+ <td style="width:10pt;text-align:center;font-family:sans-serif;font-style:italic">Ind</td>
+ <td style="width:20pt;text-align:center">[1]</td>
+ <td style="width:5pt;text-align:center">⎛<br>⎜<br>⎝</td>
+ <td style="width:10pt;text-align:center">⎡<br>⎣</td>
+ <td>
+ <table style="border-spacing:0">
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">tree</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">Set</td>
+ </tr>
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">forest</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">Set</td>
+ </tr>
+ </table>
+ </td>
+ <td style="width:10pt;text-align:center">⎤<br>⎦</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:=</td>
+ <td style="width:10pt;text-align:center">⎡<br>⎢<br>⎣</td>
+ <td>
+ <table style="border-spacing:0">
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">node</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">forest → tree</td>
+ </tr>
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">emptyf</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">forest</td>
+ </tr>
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">consf</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">tree → forest → forest</td>
+ </tr>
+ </table>
+ </td>
+ <td style="width:10pt;text-align:center">⎤<br>⎥<br>⎦</td>
+ <td style="width:5pt;text-align:center">⎞<br>⎟<br>⎠</td>
+ </tr>
+</table></pre>
+\end{rawhtml}
+\noindent which corresponds to the result of the \Coq\
+declaration:
+\begin{coq_example*}
+Inductive tree : Set :=
+ node : forest -> tree
+with forest : Set :=
+ | emptyf : forest
+ | consf : tree -> forest -> forest.
+\end{coq_example*}
+
+\noindent The declaration for a mutual inductive definition of {\even} and {\odd} is:
+\begin{latexonly}
+ \newcommand\GammaI{\left[\begin{array}{r@{:}l}
+ \even & \nat\ra\Prop \\
+ \odd & \nat\ra\Prop
+ \end{array}
+ \right]}
+ \newcommand\GammaC{\left[\begin{array}{r@{:}l}
+ \evenO & \even~\nO \\
+ \evenS & \forall n : \nat, \odd~n \ra \even~(\nS~n)\\
+ \oddS & \forall n : \nat, \even~n \ra \odd~(\nS~n)
+ \end{array}
+ \right]}
+ \vskip.5em
+ \ind{1}{\GammaI}{\GammaC}
+ \vskip.5em
+\end{latexonly}
+\begin{rawhtml}<pre><table style="border-spacing:0">
+ <tr style="vertical-align:middle">
+ <td style="width:10pt;text-align:center;font-family:sans-serif;font-style:italic">Ind</td>
+ <td style="width:20pt;text-align:center">[1]</td>
+ <td style="width:5pt;text-align:center">⎛<br>⎜<br>⎝</td>
+ <td style="width:10pt;text-align:center">⎡<br>⎣</td>
+ <td>
+ <table style="border-spacing:0">
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">even</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">nat → Prop</td>
+ </tr>
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">odd</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">nat → Prop</td>
+ </tr>
+ </table>
+ </td>
+ <td style="width:10pt;text-align:center">⎤<br>⎦</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:=</td>
+ <td style="width:10pt;text-align:center">⎡<br>⎢<br>⎣</td>
+ <td>
+ <table style="border-spacing:0">
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">even_O</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">even O</td>
+ </tr>
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">even_S</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">∀n : nat, odd n → even (S n)</td>
+ </tr>
+ <tr>
+ <td style="width:20pt;text-align:right;font-family:monospace">odd_S</td>
+ <td style="width:20pt;text-align:center;font-family:monospace">:</td>
+ <td style="text-align:left;font-family:monospace">∀n : nat, even n → odd (S n)</td>
+ </tr>
+ </table>
+ </td>
+ <td style="width:10pt;text-align:center">⎤<br>⎥<br>⎦</td>
+ <td style="width:5pt;text-align:center">⎞<br>⎟<br>⎠</td>
+ </tr>
+</table></pre>
+\end{rawhtml}
+\noindent which corresponds to the result of the \Coq\
+declaration:
+\begin{coq_example*}
+Inductive even : nat -> Prop :=
+ | even_O : even 0
+ | even_S : forall n, odd n -> even (S n)
+with odd : nat -> Prop :=
+ | odd_S : forall n, even n -> odd (S n).
+\end{coq_example*}
+
+\subsection{Types of inductive objects}
+We have to give the type of constants in a global environment $E$ which
+contains an inductive declaration.
+
+\begin{description}
+\item[Ind] \index{Typing rules!Ind}
+ \inference{\frac{\WFE{\Gamma}~~~~~~~~\Ind{}{p}{\Gamma_I}{\Gamma_C} \in E~~~~~~~~(a:A)\in\Gamma_I}{\WTEG{a}{A}}}
+\item[Constr] \index{Typing rules!Constr}
+ \inference{\frac{\WFE{\Gamma}~~~~~~~~\Ind{}{p}{\Gamma_I}{\Gamma_C} \in E~~~~~~~~(c:C)\in\Gamma_C}{\WTEG{c}{C}}}
+\end{description}
+
+\begin{latexonly}%
+\paragraph{Example.}
+Provided that our environment $E$ contains the inductive definitions we showed before,
+the two inference rules above enable us to conclude that:
+\vskip.5em
+\newcommand\prefix{E[\Gamma]\vdash\hskip.25em}
+$\begin{array}{@{}l}
+ \prefix\even : \nat\ra\Prop\\
+ \prefix\odd : \nat\ra\Prop\\
+ \prefix\evenO : \even~\nO\\
+ \prefix\evenS : \forall~n:\nat, \odd~n \ra \even~(\nS~n)\\
+ \prefix\oddS : \forall~n:\nat, \even~n \ra \odd~(\nS~n)
+ \end{array}$
+\end{latexonly}%
+
+%\paragraph{Parameters.}
+%%The parameters introduce a distortion between the inside specification
+%%of the inductive declaration where parameters are supposed to be
+%%instantiated (this representation is appropriate for checking the
+%%correctness or deriving the destructor principle) and the outside
+%%typing rules where the inductive objects are seen as objects
+%%abstracted with respect to the parameters.
+
+%In the definition of \List\ or \haslength\, $A$ is a parameter because
+%what is effectively inductively defined is $\ListA$ or $\haslengthA$ for
+%a given $A$ which is constant in the type of constructors. But when
+%we define $(\haslengthA~l~n)$, $l$ and $n$ are not parameters because the
+%constructors manipulate different instances of this family.
+
+\subsection{Well-formed inductive definitions}
+We cannot accept arbitrary inductive declarations because some of them lead
+to inconsistent systems.
+We restrict ourselves to definitions which
+satisfy a syntactic criterion of positivity. Before giving the formal
+rules, we need a few definitions:
+
+\paragraph[Definition]{Definition\index{Arity}\label{Arity}}
+A type $T$ is an {\em arity of sort $s$} if it converts
+to the sort $s$ or to a product $\forall~x:T',U$ with $U$ an arity
+of sort $s$.
+
+\paragraph[Examples]{Examples}
+$A\ra \Set$ is an arity of sort $\Set$.
+$\forall~A:\Prop,A\ra \Prop$ is an arity of sort \Prop.
+
+\paragraph[Definition]{Definition}
+A type $T$ is an {\em arity} if there is a $s\in\Sort$
+such that $T$ is an arity of sort $s$.
+
+\paragraph[Examples]{Examples}
+$A\ra \Set$ and $\forall~A:\Prop,A\ra \Prop$ are arities.
+
+\paragraph[Definition]{Definition\index{type of constructor}}
+We say that $T$ is a {\em type of constructor of $I$\index{type of constructor}}
+in one of the following two cases:
+\begin{itemize}
+ \item $T$ is $(I~t_1\ldots ~t_n)$
+ \item $T$ is $\forall x:U,T^\prime$ where $T^\prime$ is also a type of constructor of $I$
+\end{itemize}
+
+\paragraph[Examples]{Examples}
+$\nat$ and $\nat\ra\nat$ are types of constructors of $\nat$.\\
+$\forall A:\Type,\List~A$ and $\forall A:\Type,A\ra\List~A\ra\List~A$ are types of constructors of $\List$.
+
+\paragraph[Definition]{Definition\index{Positivity}\label{Positivity}}
+The type of constructor $T$ will be said to {\em satisfy the positivity
+condition} for a constant $X$ in the following cases:
+
+\begin{itemize}
+\item $T=(X~t_1\ldots ~t_n)$ and $X$ does not occur free in
+any $t_i$
+\item $T=\forall~x:U,V$ and $X$ occurs only strictly positively in $U$ and
+the type $V$ satisfies the positivity condition for $X$
+\end{itemize}
+%
+The constant $X$ {\em occurs strictly positively} in $T$ in the
+following cases:
+%
+\begin{itemize}
+\item $X$ does not occur in $T$
+\item $T$ converts to $(X~t_1 \ldots ~t_n)$ and $X$ does not occur in
+ any of $t_i$
+\item $T$ converts to $\forall~x:U,V$ and $X$ does not occur in
+ type $U$ but occurs strictly positively in type $V$
+\item $T$ converts to $(I~a_1 \ldots ~a_m ~ t_1 \ldots ~t_p)$ where
+ $I$ is the name of an inductive declaration of the form
+ $\Ind{\Gamma}{m}{I:A}{c_1:\forall p_1:P_1,\ldots \forall
+ p_m:P_m,C_1;\ldots;c_n:\forall p_1:P_1,\ldots \forall
+ p_m:P_m,C_n}$
+ (in particular, it is not mutually defined and it has $m$
+ parameters) and $X$ does not occur in any of the $t_i$, and the
+ (instantiated) types of constructor $C_i\{p_j/a_j\}_{j=1\ldots m}$
+ of $I$ satisfy
+ the nested positivity condition for $X$
+%\item more generally, when $T$ is not a type, $X$ occurs strictly
+%positively in $T[x:U]u$ if $X$ does not occur in $U$ but occurs
+%strictly positively in $u$
+\end{itemize}
+%
+The type of constructor $T$ of $I$ {\em satisfies the nested
+positivity condition} for a constant $X$ in the following
+cases:
+
+\begin{itemize}
+\item $T=(I~b_1\ldots b_m~u_1\ldots ~u_{p})$, $I$ is an inductive
+ definition with $m$ parameters and $X$ does not occur in
+any $u_i$
+\item $T=\forall~x:U,V$ and $X$ occurs only strictly positively in $U$ and
+the type $V$ satisfies the nested positivity condition for $X$
+\end{itemize}
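+
+For instance, the following declaration (a sketch, with arbitrary names) is
+rejected because \texttt{lambda} occurs to the left of an arrow in the type
+of the argument of its constructor, violating strict positivity:
+\begin{coq_example}
+Fail Inductive lambda : Set :=
+  lam : (lambda -> lambda) -> lambda.
+\end{coq_example}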
+
+\newcommand\vv{\textSFxi} % │
+\newcommand\hh{\textSFx} % ─
+\newcommand\vh{\textSFviii} % ├
+\newcommand\hv{\textSFii} % └
+\newlength\framecharacterwidth
+\settowidth\framecharacterwidth{\hh}
+\newcommand\ws{\hbox{}\hskip\the\framecharacterwidth}
+\newcommand\ruleref[1]{\hskip.25em\dots\hskip.2em{\em (bullet #1)}}
+\newcommand{\NatTree}{\mbox{\textsf{nattree}}}
+\newcommand{\NatTreeA}{\mbox{\textsf{nattree}}~\ensuremath{A}}
+\newcommand{\cnode}{\mbox{\textsf{node}}}
+\newcommand{\cleaf}{\mbox{\textsf{leaf}}}
+
+\noindent For instance, if one considers the following variant of a tree type branching over the natural numbers
+
+\begin{verbatim}
+Inductive nattree (A:Type) : Type :=
+ | leaf : nattree A
+ | node : A -> (nat -> nattree A) -> nattree A.
+\end{verbatim}
+
+\begin{latexonly}
+\noindent Then every instantiated constructor of $\NatTreeA$ satisfies the nested positivity condition for $\NatTree$\\
+\noindent
+\ws\ws\vv\\
+\ws\ws\vh\hh\ws concerning type $\NatTreeA$ of constructor $\cleaf$:\\
+\ws\ws\vv\ws\ws\ws\ws Type $\NatTreeA$ of constructor $\cleaf$ satisfies the positivity condition for $\NatTree$\\
+\ws\ws\vv\ws\ws\ws\ws because $\NatTree$ does not appear in any (real) arguments of the type of that constructor\\
+\ws\ws\vv\ws\ws\ws\ws (primarily because $\NatTree$ does not have any (real) arguments)\ruleref1\\
+\ws\ws\vv\\
+\ws\ws\hv\hh\ws concerning type $\forall~A\ra(\NN\ra\NatTreeA)\ra\NatTreeA$ of constructor $\cnode$:\\
+ \ws\ws\ws\ws\ws\ws\ws Type $\forall~A:\Type,A\ra(\NN\ra\NatTreeA)\ra\NatTreeA$ of constructor $\cnode$\\
+\ws\ws\ws\ws\ws\ws\ws satisfies the positivity condition for $\NatTree$ because:\\
+\ws\ws\ws\ws\ws\ws\ws\vv\\
+\ws\ws\ws\ws\ws\ws\ws\vh\hh\ws $\NatTree$ occurs only strictly positively in $\Type$\ruleref1\\
+\ws\ws\ws\ws\ws\ws\ws\vv\\
+\ws\ws\ws\ws\ws\ws\ws\vh\hh\ws $\NatTree$ occurs only strictly positively in $A$\ruleref1\\
+\ws\ws\ws\ws\ws\ws\ws\vv\\
+ \ws\ws\ws\ws\ws\ws\ws\vh\hh\ws $\NatTree$ occurs only strictly positively in $\NN\ra\NatTreeA$\ruleref{3+2}\\
+\ws\ws\ws\ws\ws\ws\ws\vv\\
+\ws\ws\ws\ws\ws\ws\ws\hv\hh\ws $\NatTree$ satisfies the positivity condition for $\NatTreeA$\ruleref1
+\end{latexonly}
+\begin{rawhtml}
+<pre>
+<span style="font-family:serif">Then every instantiated constructor of <span style="font-family:monospace">nattree A</span> satisfies the nested positivity condition for <span style="font-family:monospace">nattree</span></span>
+ │
+ ├─ <span style="font-family:serif">concerning type <span style="font-family:monospace">nattree A</span> of constructor <span style="font-family:monospace">leaf</span>:</span>
+ │ <span style="font-family:serif">Type <span style="font-family:monospace">nattree A</span> of constructor <span style="font-family:monospace">leaf</span> satisfies the positivity condition for <span style="font-family:monospace">nattree</span></span>
+ │ <span style="font-family:serif">because <span style="font-family:monospace">nattree</span> does not appear in any (real) arguments of the type of that constructor</span>
+ │ <span style="font-family:serif">(primarily because nattree does not have any (real) arguments) ... <span style="font-style:italic">(bullet 1)</span></span>
+ │
+ ╰─ <span style="font-family:serif">concerning type <span style="font-family:monospace">∀ A → (nat → nattree A) → nattree A</span> of constructor <span style="font-family:monospace">node</span>:</span>
+ <span style="font-family:serif">Type <span style="font-family:monospace">∀ A : Type, A → (nat → nattree A) → nattree A</span> of constructor <span style="font-family:monospace">node</span></span>
+ <span style="font-family:serif">satisfies the positivity condition for <span style="font-family:monospace">nattree</span> because:</span>
+ │
+ ├─ <span style="font-family:serif"><span style="font-family:monospace">nattree</span> occurs only strictly positively in <span style="font-family:monospace">Type</span> ... <span style="font-style:italic">(bullet 1)</span></span>
+ │
+ ├─ <span style="font-family:serif"><span style="font-family:monospace">nattree</span> occurs only strictly positively in <span style="font-family:monospace">A</span> ... <span style="font-style:italic">(bullet 1)</span></span>
+ │
+ ├─ <span style="font-family:serif"><span style="font-family:monospace">nattree</span> occurs only strictly positively in <span style="font-family:monospace">nat → nattree A</span> ... <span style="font-style:italic">(bullet 3+2)</span></span>
+ │
+ ╰─ <span style="font-family:serif"><span style="font-family:monospace">nattree</span> satisfies the positivity condition for <span style="font-family:monospace">nattree A</span> ... <span style="font-style:italic">(bullet 1)</span></span>
+</pre>
+\end{rawhtml}
+
+\paragraph{Correctness rules.}
+We shall now describe the rules allowing the introduction of a new
+inductive definition.
+
+\begin{description}
+\item[W-Ind] Let $E$ be a global environment and
+  $\Gamma_P,\Gamma_I,\Gamma_C$ be contexts such that
+ $\Gamma_I$ is $[I_1:\forall \Gamma_P,A_1;\ldots;I_k:\forall
+ \Gamma_P,A_k]$ and $\Gamma_C$ is
+ $[c_1:\forall \Gamma_P,C_1;\ldots;c_n:\forall \Gamma_P,C_n]$.
+\inference{
+ \frac{
+ (\WTE{\Gamma_P}{A_j}{s'_j})_{j=1\ldots k}
+ ~~~~~~~~ (\WTE{\Gamma_I;\Gamma_P}{C_i}{s_{q_i}})_{i=1\ldots n}
+}
+ {\WF{E;\Ind{}{p}{\Gamma_I}{\Gamma_C}}{\Gamma}}}
+provided that the following side conditions hold:
+\begin{itemize}
+\item $k>0$ and all of $I_j$ and $c_i$ are distinct names for $j=1\ldots k$ and $i=1\ldots n$,
+\item $p$ is the number of parameters of \NInd{}{\Gamma_I}{\Gamma_C}
+ and $\Gamma_P$ is the context of parameters,
+\item for $j=1\ldots k$ we have that $A_j$ is an arity of sort $s_j$ and $I_j
+ \notin E$,
+\item for $i=1\ldots n$ we have that $C_i$ is a type of constructor of
+ $I_{q_i}$ which satisfies the positivity condition for $I_1 \ldots I_k$
+ and $c_i \notin \Gamma \cup E$.
+\end{itemize}
+\end{description}
+One can remark that there is a constraint between the sort of the
+arity of the inductive type and the sort of the type of its
+constructors which will always be satisfied for the impredicative sort
+{\Prop} but may fail when defining inductive definitions
+in sort \Set{}, and may generate constraints between universes for
+inductive definitions in the {\Type} hierarchy.
+
+\paragraph{Examples.}
+It is well known that the existential quantifier can be encoded as an
+inductive definition.
+The following declaration introduces the second-order existential
+quantifier $\exists X.P(X)$.
+\begin{coq_example*}
+Inductive exProp (P:Prop->Prop) : Prop :=
+ exP_intro : forall X:Prop, P X -> exProp P.
+\end{coq_example*}
+The same definition on \Set{} is not allowed and fails:
+% (********** The following is not correct and should produce **********)
+% (*** Error: Large non-propositional inductive types must be in Type***)
+\begin{coq_example}
+Fail Inductive exSet (P:Set->Prop) : Set :=
+ exS_intro : forall X:Set, P X -> exSet P.
+\end{coq_example}
+It is possible to declare the same inductive definition in the
+universe \Type.
+The \texttt{exType} inductive definition has type $(\Type_i \ra\Prop)\ra
+\Type_j$ with the constraint that the parameter \texttt{X} of \texttt{exT\_intro} has type $\Type_k$ with $k<j$ and $k\leq i$.
+\begin{coq_example*}
+Inductive exType (P:Type->Prop) : Type :=
+ exT_intro : forall X:Type, P X -> exType P.
+\end{coq_example*}
+%We shall assume for the following definitions that, if necessary, we
+%annotated the type of constructors such that we know if the argument
+%is recursive or not. We shall write the type $(x:_R T)C$ if it is
+%a recursive argument and $(x:_P T)C$ if the argument is not recursive.
+
+\paragraph[Template polymorphism.]{Template polymorphism.\index{Template polymorphism}}
+\label{Template-polymorphism}
+
+Inductive types declared in {\Type} are
+polymorphic over their arguments in {\Type}.
+If $A$ is an arity of some sort and $s$ is a sort, we write $A_{/s}$ for the arity
+obtained from $A$ by replacing its sort with $s$. In particular, if $A$
+is well-typed in some global environment and local context, then $A_{/s}$ is typable
+by typability of all products in the Calculus of Inductive Constructions.
+The following typing rule is added to the theory.
+
+\begin{description}
+\item[Ind-Family] Let $\Ind{}{p}{\Gamma_I}{\Gamma_C}$ be an
+ inductive definition. Let $\Gamma_P = [p_1:P_1;\ldots;p_{p}:P_{p}]$
+ be its context of parameters, $\Gamma_I = [I_1:\forall
+ \Gamma_P,A_1;\ldots;I_k:\forall \Gamma_P,A_k]$ its context of
+ definitions and $\Gamma_C = [c_1:\forall
+ \Gamma_P,C_1;\ldots;c_n:\forall \Gamma_P,C_n]$ its context of
+ constructors, with $c_i$ a constructor of $I_{q_i}$.
+
+ Let $m \leq p$ be the length of the longest prefix of parameters
+ such that the $m$ first arguments of all occurrences of all $I_j$ in
+ all $C_k$ (even the occurrences in the hypotheses of $C_k$) are
+ exactly applied to $p_1~\ldots~p_m$ ($m$ is the number of {\em
+ recursively uniform parameters} and the $p-m$ remaining parameters
+ are the {\em recursively non-uniform parameters}). Let $q_1$,
+ \ldots, $q_r$, with $0\leq r\leq m$, be a (possibly) partial
+ instantiation of the recursively uniform parameters of
+ $\Gamma_P$. We have:
+
+\inference{\frac
+{\left\{\begin{array}{l}
+\Ind{}{p}{\Gamma_I}{\Gamma_C} \in E\\
+(E[] \vdash q_l : P'_l)_{l=1\ldots r}\\
+(\WTELECONV{}{P'_l}{\subst{P_l}{p_u}{q_u}_{u=1\ldots l-1}})_{l=1\ldots r}\\
+1 \leq j \leq k
+\end{array}
+\right.}
+{E[] \vdash I_j\,q_1\,\ldots\,q_r:\forall [p_{r+1}:P_{r+1};\ldots;p_{p}:P_{p}], (A_j)_{/s_j}}
+}
+
+provided that the following side conditions hold:
+
+\begin{itemize}
+\item $\Gamma_{P'}$ is the context obtained from $\Gamma_P$ by
+replacing each $P_l$ that is an arity with $P'_l$ for $1\leq l \leq r$ (notice that
+$P_l$ arity implies $P'_l$ arity since $\WTELECONV{}{P'_l}{ \subst{P_l}{p_u}{q_u}_{u=1\ldots l-1}}$);
+\item there are sorts $s_i$, for $1 \leq i \leq k$ such that, for
+ $\Gamma_{I'} = [I_1:\forall
+ \Gamma_{P'},(A_1)_{/s_1};\ldots;I_k:\forall \Gamma_{P'},(A_k)_{/s_k}]$
+we have $(\WTE{\Gamma_{I'};\Gamma_{P'}}{C_i}{s_{q_i}})_{i=1\ldots n}$;
+\item the sorts $s_i$ are such that all eliminations, to {\Prop}, {\Set} and
+ $\Type(j)$, are allowed (see Section~\ref{allowedeleminationofsorts}).
+\end{itemize}
+\end{description}
+%
+Notice that if $I_j\,q_1\,\ldots\,q_r$ is typable using the rules {\bf
+Ind} and {\bf App}, then it is typable using the rule {\bf
+Ind-Family}. Conversely, the extended theory is not stronger than the
+theory without {\bf Ind-Family}. We get an equiconsistency result by
+mapping each $\Ind{}{p}{\Gamma_I}{\Gamma_C}$ occurring into a
+given derivation into as many different inductive types and constructors
+as the number of different (partial) replacements of sorts, needed for
+this derivation, in the parameters that are arities (this is possible
+because $\Ind{}{p}{\Gamma_I}{\Gamma_C}$ well-formed implies
+that $\Ind{}{p}{\Gamma_{I'}}{\Gamma_{C'}}$ is well-formed and
+has the same allowed eliminations, where
+$\Gamma_{I'}$ is defined as above and $\Gamma_{C'} = [c_1:\forall
+\Gamma_{P'},C_1;\ldots;c_n:\forall \Gamma_{P'},C_n]$). That is,
+the changes in the types of each partial instance
+$q_1\,\ldots\,q_r$ can be characterized by the ordered sets of arity
+sorts among the types of parameters, and to each signature is
+associated a new inductive definition with fresh names. Conversion is
+preserved as any (partial) instance $I_j\,q_1\,\ldots\,q_r$ or
+$C_i\,q_1\,\ldots\,q_r$ is mapped to the names chosen in the specific
+instance of $\Ind{}{p}{\Gamma_I}{\Gamma_C}$.
+
+\newcommand{\Single}{\mbox{\textsf{Set}}}
+
+In practice, the rule {\bf Ind-Family} is used by {\Coq} only when all the
+inductive types of the inductive definition are declared with an arity whose
+sort is in the $\Type$
+hierarchy. Then, the polymorphism is over the parameters whose
+type is an arity of sort in the {\Type} hierarchy.
+The sorts $s_j$ are
+chosen canonically so that each $s_j$ is minimal with respect to the
+hierarchy ${\Prop}\subset{\Set_p}\subset\Type$ where $\Set_p$ is
+predicative {\Set}.
+%and ${\Prop_u}$ is the sort of small singleton
+%inductive types (i.e. of inductive types with one single constructor
+%and that contains either proofs or inhabitants of singleton types
+%only).
+More precisely, an empty or small singleton inductive definition
+(i.e. an inductive definition of which all inductive types are
+singleton -- see paragraph~\ref{singleton}) is set in
+{\Prop}, a small non-singleton inductive type is set in {\Set} (even
+in case {\Set} is impredicative -- see Section~\ref{impredicativity}),
+and otherwise in the {\Type} hierarchy.
+
+Note that the side-condition about allowed elimination sorts in the
+rule~{\bf Ind-Family} is just to avoid recomputing the allowed
+elimination sorts at each instance of a pattern-matching (see
+section~\ref{elimdep}).
+As an example, let us consider the following definition:
+\begin{coq_example*}
+Inductive option (A:Type) : Type :=
+| None : option A
+| Some : A -> option A.
+\end{coq_example*}
+%
+As the definition is set in the {\Type} hierarchy, it is used
+polymorphically over its parameters whose types are arities of a sort
+in the {\Type} hierarchy. Here, the parameter $A$ has this property,
+hence, if \texttt{option} is applied to a type in {\Set}, the result is
+in {\Set}. Note that if \texttt{option} is applied to a type in {\Prop},
+then the result is not set in \texttt{Prop} but still in \texttt{Set}.
+This is because \texttt{option} is not a singleton type (see
+section~\ref{singleton}) and it would lose the elimination to {\Set} and
+{\Type} if set in {\Prop}.
+
+\begin{coq_example}
+Check (fun A:Set => option A).
+Check (fun A:Prop => option A).
+\end{coq_example}
+%
+Here is another example.
+%
+\begin{coq_example*}
+Inductive prod (A B:Type) : Type := pair : A -> B -> prod A B.
+\end{coq_example*}
+%
+As \texttt{prod} is a singleton type, it will be in {\Prop} if applied
+to two propositions, in {\Set} if applied to at least one type
+in {\Set} and none in {\Type}, and in {\Type} otherwise. In all cases,
+the three kinds of elimination schemes are allowed.
+
+\begin{coq_example}
+Check (fun A:Set => prod A).
+Check (fun A:Prop => prod A A).
+Check (fun (A:Prop) (B:Set) => prod A B).
+Check (fun (A:Type) (B:Prop) => prod A B).
+\end{coq_example}
+
+\Rem Template polymorphism used to be called ``sort-polymorphism of
+inductive types'' before universe polymorphism (see
+Chapter~\ref{Universes-full}) was introduced.
+
+\subsection{Destructors}
+The specification of inductive definitions with arities and
+constructors is quite natural. But we still have to say how to use an
+object in an inductive type.
+
+This problem is rather delicate. There are actually several different
+ways to do that. Some of them are logically equivalent but not always
+equivalent from the computational point of view or from the user point
+of view.
+
+From the computational point of view, we want to be able to define a
+function whose domain is an inductively defined type by using a
+combination of case analysis over the possible constructors of the
+object and recursion.
+
+Because we need to keep a consistent theory and also we prefer to keep
+a strongly normalizing reduction, we cannot accept any sort of
+recursion (even terminating). So the basic idea is to restrict
+ourselves to primitive recursive functions and functionals.
+
+For instance, assuming a parameter $A:\Set$ exists in the local context, we
+want to build a function \length\ of type $\ListA\ra \nat$ which
+computes the length of a list, such that $(\length~(\Nil~A)) = \nO$
+and $(\length~(\cons~A~a~l)) = (\nS~(\length~l))$. We want these
+equalities to be recognized implicitly and taken into account in the
+conversion rule.
+
+From the logical point of view, we have built a type family by giving
+a set of constructors. We want to capture the fact that we do not
+have any other way to build an object in this type. So when trying to
+prove a property about an object $m$ in an inductive definition it is
+enough to enumerate all the cases where $m$ starts with a different
+constructor.
+
+In case the inductive definition is effectively a recursive one, we
+want to capture the extra property that we have built the smallest
+fixed point of this recursive equation. This says that we are only
+manipulating finite objects. This analysis provides induction
+principles.
+For instance, in order to prove $\forall l:\ListA,(\haslengthA~l~(\length~l))$
+it is enough to prove:
+%
+\begin{itemize}
+ \item $(\haslengthA~(\Nil~A)~(\length~(\Nil~A)))$
+ \item $\forall a:A, \forall l:\ListA, (\haslengthA~l~(\length~l)) \\
+   \ra (\haslengthA~(\cons~A~a~l)~(\length~(\cons~A~a~l)))$
+\end{itemize}
+%
+which given the conversion equalities satisfied by \length\ is the
+same as proving:
+%
+\begin{itemize}
+ \item $(\haslengthA~(\Nil~A)~\nO)$
+ \item $\forall a:A, \forall l:\ListA, (\haslengthA~l~(\length~l)) \\
+   \ra (\haslengthA~(\cons~A~a~l)~(\nS~(\length~l)))$
+\end{itemize}
+%
+One conceptually simple way to do that, following the basic scheme
+proposed by Martin-L\"of in his Intuitionistic Type Theory, is to
+introduce for each inductive definition an elimination operator. At
+the logical level it is a proof of the usual induction principle and
+at the computational level it implements a generic operator for doing
+primitive recursion over the structure.
+
+But this operator is rather tedious to implement and use. We choose in
+this version of {\Coq} to factorize the operator for primitive recursion
+into two more primitive operations as was first suggested by Th. Coquand
+in~\cite{Coq92}. One is the definition by pattern-matching. The second one is a definition by guarded fixpoints.
+
+\subsubsection[The {\tt match\ldots with \ldots end} construction.]{The {\tt match\ldots with \ldots end} construction.\label{Caseexpr}
+\index{match@{\tt match\ldots with\ldots end}}}
+
+The basic idea of this operator is that we have an object
+$m$ in an inductive type $I$ and we want to prove a property
+which possibly depends on $m$. For this, it is enough to prove the
+property for $m = (c_i~u_1\ldots u_{p_i})$ for each constructor of $I$.
+The \Coq{} term for this proof will be written:
+\[\kw{match}~m~\kw{with}~ (c_1~x_{11}~...~x_{1p_1}) \Ra f_1 ~|~\ldots~|~
+ (c_n~x_{n1}~...~x_{np_n}) \Ra f_n~ \kw{end}\]
+In this expression, if
+$m$ eventually happens to evaluate to $(c_i~u_1\ldots u_{p_i})$ then
+the expression will behave as specified in its $i$-th branch and
+it will reduce to $f_i$ where the $x_{i1}$\ldots $x_{ip_i}$ are replaced
+by the $u_1\ldots u_{p_i}$ according to the $\iota$-reduction.
+
+Actually, for type-checking a \kw{match\ldots with\ldots end}
+expression we also need to know the predicate $P$ to be proved by case
+analysis. In the general case where $I$ is an inductively defined
+$n$-ary relation, $P$ is a predicate over $n+1$ arguments: the first $n$
+ones correspond to the arguments of $I$ (parameters excluded), and the last
+one corresponds to the object $m$. \Coq{} can sometimes infer this
+predicate but sometimes not. The concrete syntax for describing this
+predicate uses the \kw{as\ldots in\ldots return} construction. For
+instance, let us assume that $I$ is a unary predicate with one
+parameter and one argument. The predicate is made explicit using the syntax:
+\[\kw{match}~m~\kw{as}~ x~ \kw{in}~ I~\verb!_!~a~ \kw{return}~ P
+ ~\kw{with}~ (c_1~x_{11}~...~x_{1p_1}) \Ra f_1 ~|~\ldots~|~
+ (c_n~x_{n1}~...~x_{np_n}) \Ra f_n \kw{end}\]
+The \kw{as} part can be omitted if either the result type does not
+depend on $m$ (non-dependent elimination) or $m$ is a variable (in
+this case, $m$ can occur in $P$ where it is considered a bound variable).
+The \kw{in} part can be
+omitted if the result type does not depend on the arguments of
+$I$. Note that the arguments of $I$ corresponding to parameters
+\emph{must} be \verb!_!, because the result type is not generalized to
+all possible values of the parameters.
+The other arguments of $I$
+(sometimes called indices in the literature)
+% NOTE: e.g. http://www.qatar.cmu.edu/~sacchini/papers/types08.pdf
+have to be variables
+($a$ above) and these variables can occur in $P$.
+The expression after \kw{in}
+must be seen as an \emph{inductive type pattern}. Notice that
+expansion of implicit arguments and notations apply to this pattern.
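+For instance, the symmetry of Leibniz equality can be obtained by such a
+dependent pattern-matching, the \kw{in} clause binding the index of
+\texttt{eq} while its parameters are written \verb!_! (a small
+illustrative check):
+\begin{coq_example}
+(* z is bound to the index of eq; the parameters of eq are written _ *)
+Check (fun (A:Type) (x y:A) (h : x = y) =>
+  match h in (_ = z) return z = x with
+  | eq_refl => eq_refl
+  end).
+\end{coq_example}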
+%
+For the purpose of presenting the inference rules, we use a more
+compact notation:
+\[ \Case{(\lb a x \mto P)}{m}{ \lb x_{11}~...~x_{1p_1} \mto f_1 ~|~\ldots~|~
+ \lb x_{n1}...x_{np_n} \mto f_n}\]
+
+%% CP 06/06 Obsolete avec la nouvelle syntaxe et incompatible avec la
+%% presentation theorique qui suit
+% \paragraph{Non-dependent elimination.}
+%
+% When defining a function of codomain $C$ by case analysis over an
+% object in an inductive type $I$, we build an object of type $I
+% \ra C$. The minimality principle on an inductively defined logical
+% predicate $I$ of type $A \ra \Prop$ is often used to prove a property
+% $\forall x:A,(I~x)\ra (C~x)$. These are particular cases of the dependent
+% principle that we stated before with a predicate which does not depend
+% explicitly on the object in the inductive definition.
+
+% For instance, a function testing whether a list is empty
+% can be
+% defined as:
+% \[\kw{fun} l:\ListA \Ra \kw{match}~l~\kw{with}~ \Nil \Ra \true~
+% |~(\cons~a~m) \Ra \false \kw{end}\]
+% represented by
+% \[\lb l:\ListA \mto\Case{\bool}{l}{\true~ |~ \lb a~m,~\false}\]
+%\noindent {\bf Remark. }
+
+% In the system \Coq\ the expression above, can be
+% written without mentioning
+% the dummy abstraction:
+% \Case{\bool}{l}{\Nil~ \mbox{\tt =>}~\true~ |~ (\cons~a~m)~
+% \mbox{\tt =>}~ \false}
+
+\paragraph[Allowed elimination sorts.]{Allowed elimination sorts.\index{Elimination sorts}}
+\label{allowedeleminationofsorts}
+
+An important question for building the typing rule for \kw{match} is
+what the type of $\lb a x \mto P$ can be with respect to the type of $m$. If
+$m:I$ and
+$I:A$ and
+$\lb a x \mto P : B$
+then by \compat{I:A}{B} we mean that one can use $\lb a x \mto P$ with $m$ in the above
+match-construct.
+
+\paragraph{Notations.}
+The relation \compat{I:A}{B} is defined as the smallest relation
+satisfying the rules below.
+We write \compat{I}{B} for \compat{I:A}{B} where $A$ is the type of
+$I$.
+
+The case of inductive definitions in sorts \Set\ or \Type{} is simple.
+There is no restriction on the sort of the predicate to be
+eliminated.
+%
+\begin{description}
+\item[Prod] \inference{\frac{\compat{(I~x):A'}{B'}}
+ {\compat{I:\forall x:A, A'}{\forall x:A, B'}}}
+\item[{\Set} \& \Type] \inference{\frac{
+ s_1 \in \{\Set,\Type(j)\}~~~~~~~~s_2 \in \Sort}{\compat{I:s_1}{I\ra s_2}}}
+\end{description}
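+For instance, the inductive type \texttt{bool}, which is in {\Set}, can be
+eliminated towards any sort, e.g. so as to compute a type by case
+analysis on a boolean (a small illustrative check):
+\begin{coq_example}
+(* each branch returns a type: elimination of bool towards the Type hierarchy *)
+Check (fun b:bool =>
+  match b return Set with
+  | true => nat
+  | false => bool
+  end).
+\end{coq_example}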
+%
+The case of inductive definitions of sort \Prop{} is a bit more
+complicated, because of our interpretation of this sort. The only
+harmless elimination allowed is the one where the predicate $P$ is also of
+sort \Prop.
+\begin{description}
+\item[\Prop] \inference{\compat{I:\Prop}{I\ra\Prop}}
+\end{description}
+\Prop{} is the type of logical propositions; the proofs of properties
+$P$ in \Prop{} cannot be used for computation and are consequently
+ignored by the extraction mechanism.
+Assume $A$ and $B$ are two propositions, and the logical disjunction
+$A\vee B$ is defined inductively by:
+\begin{coq_example*}
+Inductive or (A B:Prop) : Prop :=
+ or_introl : A -> or A B | or_intror : B -> or A B.
+\end{coq_example*}
+The following definition which computes a boolean value by case over
+the proof of \texttt{or A B} is not accepted:
+% (***************************************************************)
+% (*** This example should fail with ``Incorrect elimination'' ***)
+\begin{coq_example}
+Fail Definition choice (A B: Prop) (x:or A B) :=
+ match x with or_introl _ _ a => true | or_intror _ _ b => false end.
+\end{coq_example}
+From the computational point of view, the structure of the proof of
+\texttt{(or A B)} in this term is needed for computing the boolean
+value.
+
+In general, if $I$ has type \Prop\ then $P$ cannot have type $I\ra
+\Set$, because it would mean building an informative proof of type
+$(P~m)$ by a case analysis over a non-computational object that
+will disappear in the extracted program. But the other way round is safe
+with respect to our interpretation: we can have $I$ a computational
+object and $P$ a non-computational one; it just corresponds to proving
+a logical property of a computational object.
+
+% Also if $I$ is in one of the sorts \{\Prop, \Set\}, one cannot in
+% general allow an elimination over a bigger sort such as \Type. But
+% this operation is safe whenever $I$ is a {\em small inductive} type,
+% which means that all the types of constructors of
+% $I$ are small with the following definition:\\
+% $(I~t_1\ldots t_s)$ is a {\em small type of constructor} and
+% $\forall~x:T,C$ is a small type of constructor if $C$ is and if $T$
+% has type \Prop\ or \Set. \index{Small inductive type}
+
+% We call this particular elimination which gives the possibility to
+% compute a type by induction on the structure of a term, a {\em strong
+% elimination}\index{Strong elimination}.
+
+In the same spirit, elimination on $P$ of type $I\ra
+\Type$ cannot be allowed because it trivially implies elimination
+on $P$ of type $I\ra \Set$ by cumulativity. It would also make it possible
+to build two proofs of the same property which are provably different,
+contradicting the proof-irrelevance property, which is sometimes a
+useful axiom:
+\begin{coq_example}
+Axiom proof_irrelevance : forall (P : Prop) (x y : P), x=y.
+\end{coq_example}
+\begin{coq_eval}
+Reset proof_irrelevance.
+\end{coq_eval}
+The elimination of an inductive definition of type \Prop\ on a
+predicate $P$ of type $I\ra \Type$ leads to a paradox when applied to
+an impredicative inductive definition like the second-order existential
+quantifier \texttt{exProp} defined above, because it gives access to
+the two projections on this type.
+
+%\paragraph{Warning: strong elimination}
+%\index{Elimination!Strong elimination}
+%In previous versions of Coq, for a small inductive definition, only the
+%non-informative strong elimination on \Type\ was allowed, because
+%strong elimination on \Typeset\ was not compatible with the current
+%extraction procedure. In this version, strong elimination on \Typeset\
+%is accepted but a dummy element is extracted from it and may generate
+%problems if extracted terms are explicitly used such as in the
+%{\tt Program} tactic or when extracting ML programs.
+
+\paragraph[Empty and singleton elimination]{Empty and singleton elimination\label{singleton}
+\index{Elimination!Singleton elimination}
+\index{Elimination!Empty elimination}}
+
+There are special inductive definitions in \Prop\ for which more
+eliminations are allowed.
+\begin{description}
+\item[\Prop-extended]
+\inference{
+ \frac{I \mbox{~is an empty or singleton
+ definition}~~~s \in \Sort}{\compat{I:\Prop}{I\ra s}}
+}
+\end{description}
+%
+% A {\em singleton definition} has always an informative content,
+% even if it is a proposition.
+%
+A {\em singleton
+definition} has only one constructor and all the arguments of this
+constructor have type \Prop. In that case, there is a canonical
+way to interpret the informative extraction on an object in that type,
+such that the elimination on any sort $s$ is legal. Typical examples are
+the conjunction of non-informative propositions and the equality.
+If there is a hypothesis $h:a=b$ in the local context, it can be used for
+rewriting not only in logical propositions but also in any type.
+% In that case, the term \verb!eq_rec! which was defined as an axiom, is
+% now a term of the calculus.
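+For instance, singleton elimination is what makes the following transport
+of an inhabitant of an arbitrary type family along an equality proof
+well-typed (written with explicit arguments so as not to rely on
+implicit-argument settings):
+\begin{coq_example}
+(* eq_rec eliminates the proof h : a = b towards the sort Set *)
+Check (fun (P:nat -> Set) (a b:nat) (h:a = b) (x:P a) =>
+  @eq_rec nat a P x b h).
+\end{coq_example}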
+\begin{coq_eval}
+Require Extraction.
+\end{coq_eval}
+\begin{coq_example}
+Print eq_rec.
+Extraction eq_rec.
+\end{coq_example}
+An empty definition has no constructors; in that case also,
+elimination on any sort is allowed.
+
+\paragraph{Type of branches.}
+Let $c$ be a term of type $C$; we assume $C$ is a type of constructor
+for an inductive type $I$. Let $P$ be a term that represents the
+property to be proved.
+We assume $r$ is the number of parameters and $p$ is the number of arguments.
+
+We define a new type \CI{c:C}{P} which represents the type of the
+branch corresponding to the $c:C$ constructor.
+\[
+\begin{array}{ll}
+\CI{c:(I~p_1\ldots p_r\ t_1 \ldots t_p)}{P} &\equiv (P~t_1\ldots ~t_p~c) \\[2mm]
+\CI{c:\forall~x:T,C}{P} &\equiv \forall~x:T,\CI{(c~x):C}{P}
+\end{array}
+\]
+We write \CI{c}{P} for \CI{c:C}{P} with $C$ the type of $c$.
+
+\paragraph{Example.}
+The following term in concrete syntax:
+\begin{verbatim}
+match t as l return P' with
+| nil _ => t1
+| cons _ hd tl => t2
+end
+\end{verbatim}
+can be represented in abstract syntax as $$\Case{P}{t}{f_1\,|\,f_2}$$
+where
+\begin{eqnarray*}
+ P & = & \lambda~l~.~P^\prime\\
+ f_1 & = & t_1\\
+ f_2 & = & \lambda~(hd:\nat)~.~\lambda~(tl:\List~\nat)~.~t_2
+\end{eqnarray*}
+According to the definition:
+\begin{latexonly}\vskip.5em\noindent\end{latexonly}%
+\begin{htmlonly}
+
+\end{htmlonly}
+$ \CI{(\Nil~\nat)}{P} \equiv \CI{(\Nil~\nat) : (\List~\nat)}{P} \equiv (P~(\Nil~\nat))$
+\begin{latexonly}\vskip.5em\noindent\end{latexonly}%
+\begin{htmlonly}
+
+\end{htmlonly}
+$ \CI{(\cons~\nat)}{P}
+  \equiv\CI{(\cons~\nat) : (\nat\ra\List~\nat\ra\List~\nat)}{P} \\
+  \equiv\forall n:\nat, \CI{(\cons~\nat~n) : (\List~\nat\ra\List~\nat)}{P} \\
+  \equiv\forall n:\nat, \forall l:\List~\nat, \CI{(\cons~\nat~n~l) : (\List~\nat)}{P} \\
+\equiv\forall n:\nat, \forall l:\List~\nat,(P~(\cons~\nat~n~l))$.
+\begin{latexonly}\vskip.5em\noindent\end{latexonly}%
+\begin{htmlonly}
+
+\end{htmlonly}
+Given some $P$, then \CI{(\Nil~\nat)}{P} represents the expected type of $f_1$, and
+\CI{(\cons~\nat)}{P} represents the expected type of $f_2$.
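+For instance, instantiating $P'$ with $\nat$, $t_1$ with $\nO$ and $t_2$
+with \texttt{hd}, the corresponding concrete term is well-typed, each
+branch having the expected type computed above (this check uses the
+\texttt{list} type declared earlier in this chapter):
+\begin{coq_example}
+(* the branch for nil has type nat; the one for cons, nat -> list nat -> nat *)
+Check (fun t : list nat =>
+  match t return nat with
+  | nil _ => O
+  | cons _ hd _ => hd
+  end).
+\end{coq_example}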
+
+\paragraph{Typing rule.}
+
+Our very general destructor for inductive definition enjoys the
+following typing rule
+% , where we write
+% \[
+% \Case{P}{c}{[x_{11}:T_{11}]\ldots[x_{1p_1}:T_{1p_1}]g_1\ldots
+% [x_{n1}:T_{n1}]\ldots[x_{np_n}:T_{np_n}]g_n}
+% \]
+% for
+% \[
+% \Case{P}{c}{(c_1~x_{11}~...~x_{1p_1}) \Ra g_1 ~|~\ldots~|~
+% (c_n~x_{n1}...x_{np_n}) \Ra g_n }
+% \]
+
+\begin{description}
+\item[match] \label{elimdep} \index{Typing rules!match}
+\inference{
+\frac{\WTEG{c}{(I~q_1\ldots q_r~t_1\ldots t_s)}~~
+ \WTEG{P}{B}~~\compat{(I~q_1\ldots q_r)}{B}
+ ~~
+(\WTEG{f_i}{\CI{(c_{p_i}~q_1\ldots q_r)}{P}})_{i=1\ldots l}}
+{\WTEG{\Case{P}{c}{f_1|\ldots |f_l}}{(P\ t_1\ldots t_s\ c)}}}%\\[3mm]
+
+provided $I$ is an inductive type in a definition
+\Ind{}{r}{\Gamma_I}{\Gamma_C} with
+$\Gamma_C = [c_1:C_1;\ldots;c_n:C_n]$ and $c_{p_1}\ldots c_{p_l}$ are the
+only constructors of $I$.
+\end{description}
+
+\paragraph{Example.}
+
+Below is a typing rule for the term shown in the previous example:
+\inference{
+ \frac{%
+ \WTEG{t}{(\List~\nat)}~~~~%
+ \WTEG{P}{B}~~~~%
+ \compat{(\List~\nat)}{B}~~~~%
+ \WTEG{f_1}{\CI{(\Nil~\nat)}{P}}~~~~%
+ \WTEG{f_2}{\CI{(\cons~\nat)}{P}}%
+ }
+{\WTEG{\Case{P}{t}{f_1|f_2}}{(P~t)}}}
+
+\paragraph[Definition of $\iota$-reduction.]{Definition of $\iota$-reduction.\label{iotared}
+\index{iota-reduction@$\iota$-reduction}}
+We still have to define the $\iota$-reduction in the general case.
+
+A $\iota$-redex is a term of the following form:
+\[\Case{P}{(c_{p_i}~q_1\ldots q_r~a_1\ldots a_m)}{f_1|\ldots |
+ f_l}\]
+with $c_{p_i}$ the $i$-th constructor of the inductive type $I$ with $r$
+parameters.
+
+The $\iota$-contraction of this term is $(f_i~a_1\ldots a_m)$ leading
+to the general reduction rule:
+\[ \Case{P}{(c_{p_i}~q_1\ldots q_r~a_1\ldots a_m)}{f_1|\ldots |
+    f_l} \triangleright_{\iota} (f_i~a_1\ldots a_m) \]
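+For instance, with the \texttt{list} type declared earlier in this
+chapter, the following case analysis on a constructor-headed term is a
+$\iota$-redex, and the \texttt{compute} reduction contracts it:
+\begin{coq_example}
+(* the scrutinee starts with the constructor cons, so the match reduces *)
+Eval compute in
+  match cons nat O (nil nat) with
+  | nil _ => true
+  | cons _ _ _ => false
+  end.
+\end{coq_example}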
+
+\subsection[Fixpoint definitions]{Fixpoint definitions\label{Fix-term} \index{Fix@{\tt Fix}}}
+The second operator for elimination is fixpoint definition.
+This fixpoint may involve several mutually recursive definitions.
+The basic concrete syntax for a set of mutually recursive
+declarations is (with $\Gamma_i$ contexts):
+\[\kw{fix}~f_1 (\Gamma_1) :A_1:=t_1~\kw{with} \ldots \kw{with}~ f_n
+(\Gamma_n) :A_n:=t_n\]
+The terms are obtained by projections from this set of declarations
+and are written
+\[\kw{fix}~f_1 (\Gamma_1) :A_1:=t_1~\kw{with} \ldots \kw{with}~ f_n
+(\Gamma_n) :A_n:=t_n~\kw{for}~f_i\]
+In the inference rules, we represent such a
+term by
+\[\Fix{f_i}{f_1:A_1':=t_1' \ldots f_n:A_n':=t_n'}\]
+with $t_i'$ (resp. $A_i'$) representing the term $t_i$ abstracted
+(resp. generalized) with
+respect to the bindings in the context $\Gamma_i$, namely
+$t_i'=\lb \Gamma_i \mto t_i$ and $A_i'=\forall \Gamma_i, A_i$.
+
+\subsubsection{Typing rule}
+The typing rule is the expected one for a fixpoint.
+
+\begin{description}
+\item[Fix] \index{Typing rules!Fix}
+\inference{\frac{(\WTEG{A_i}{s_i})_{i=1\ldots n}~~~~
+ (\WTE{\Gamma,f_1:A_1,\ldots,f_n:A_n}{t_i}{A_i})_{i=1\ldots n}}
+ {\WTEG{\Fix{f_i}{f_1:A_1:=t_1 \ldots f_n:A_n:=t_n}}{A_i}}}
+\end{description}
+%
+Not every fixpoint definition can be accepted because non-normalizing terms
+allow proofs of absurdity.
+%
+The basic scheme of recursion that should be allowed is the one needed for
+defining primitive
+recursive functionals. In that case the fixpoint enjoys a special
+syntactic restriction, namely one of the arguments belongs to an
+inductive type, the function starts with a case analysis, and recursive
+calls are done on variables coming from patterns and representing subterms.
+%
+For instance in the case of natural numbers, a proof of the induction
+principle of type
+\[\forall P:\nat\ra\Prop, (P~\nO)\ra(\forall n:\nat, (P~n)\ra(P~(\nS~n)))\ra
+\forall n:\nat, (P~n)\]
+can be represented by the term:
+\[\begin{array}{l}
+\lb P:\nat\ra\Prop\mto\lb f:(P~\nO)\mto \lb g:(\forall n:\nat,
+(P~n)\ra(P~(\nS~n))) \mto\\
+\Fix{h}{h:\forall n:\nat, (P~n):=\lb n:\nat\mto \Case{P}{n}{f~|~\lb
+ p:\nat\mto (g~p~(h~p))}}
+\end{array}
+\]
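+In concrete syntax, this term can be entered and type-checked as follows
+(an anonymous \kw{fix} expression; since the matched term \texttt{n} is a
+variable, it may occur directly in the \kw{return} clause, see
+Section~\ref{Caseexpr}):
+\begin{coq_example}
+(* h is defined by structural recursion on its single argument n *)
+Check (fun (P:nat -> Prop) (f:P O) (g:forall n:nat, P n -> P (S n)) =>
+  fix h (n:nat) : P n :=
+    match n return P n with
+    | O => f
+    | S p => g p (h p)
+    end).
+\end{coq_example}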
+%
+Before accepting a fixpoint definition as being correctly typed, we
+check that the definition is ``guarded''. A precise analysis of this
+notion can be found in~\cite{Gim94}.
+%
+The first stage is to specify on which argument the fixpoint will be
+decreasing. The type of this argument should be an inductive
+definition.
+%
+For doing this, the syntax of fixpoints is extended and becomes
+ \[\Fix{f_i}{f_1/k_1:A_1:=t_1 \ldots f_n/k_n:A_n:=t_n}\]
+where $k_i$ are positive integers.
+Each $k_i$ represents the index of the parameter of $f_i$ on which $f_i$ is decreasing.
+Each $A_i$ should be a type (reducible to a term) starting with at least
+$k_i$ products $\forall y_1:B_1,\ldots \forall y_{k_i}:B_{k_i}, A'_i$
+with $B_{k_i}$ an inductive type.
+
+Now in the definition $t_i$, if $f_j$ occurs then it should be applied
+to at least $k_j$ arguments and the $k_j$-th argument should be
+syntactically recognized as structurally smaller than $y_{k_i}$.
+
+
+The definition of being structurally smaller is a bit technical.
+One needs first to define the notion of
+{\em recursive arguments of a constructor}\index{Recursive arguments}.
+For an inductive definition \Ind{}{r}{\Gamma_I}{\Gamma_C},
+if the type of a constructor $c$ has the form
+$\forall p_1:P_1,\ldots \forall p_r:P_r,
+\forall x_1:T_1, \ldots \forall x_m:T_m, (I_j~p_1\ldots
+p_r~t_1\ldots t_s)$, then the recursive arguments will correspond to $T_i$ in
+which one of the $I_l$ occurs.
+
+The main rules for being structurally smaller are the following:\\
+Given a variable $y$ whose type is an inductive
+definition in a declaration
+\Ind{}{r}{\Gamma_I}{\Gamma_C}
+where $\Gamma_I$ is $[I_1:A_1;\ldots;I_k:A_k]$, and $\Gamma_C$ is
+ $[c_1:C_1;\ldots;c_n:C_n]$.
+The terms structurally smaller than $y$ are:
+\begin{itemize}
+\item $(t~u)$ and $\lb x:u \mto t$ when $t$ is structurally smaller than $y$.
+\item \Case{P}{c}{f_1\ldots f_n} when each $f_i$ is structurally
+ smaller than $y$. \\
+  If $c$ is $y$ or is structurally smaller than $y$, its type is an inductive
+  definition $I_p$ which is part of the inductive
+  declaration corresponding to $y$.
+  Each $f_i$ corresponds to a type of constructor $C_q \equiv
+  \forall p_1:P_1,\ldots,\forall p_r:P_r, \forall y_1:B_1, \ldots \forall y_k:B_k, (I~a_1\ldots a_k)$
+  and can consequently be
+  written $\lb y_1:B'_1\mto \ldots \lb y_k:B'_k\mto g_i$,
+  where $B'_i$ is obtained from $B_i$ by substituting the parameter variables.
+  In that case, the variables $y_j$ occurring
+  in $g_i$ that correspond to recursive arguments $B_i$ (the ones in
+  which one of the $I_l$ occurs) are structurally smaller than $y$.
+\end{itemize}
+The following definitions are correct; we enter them using the
+{\tt Fixpoint} command as described in Section~\ref{Fixpoint} and show
+the internal representation.
+\begin{coq_example}
+Fixpoint plus (n m:nat) {struct n} : nat :=
+ match n with
+ | O => m
+ | S p => S (plus p m)
+ end.
+Print plus.
+Fixpoint lgth (A:Set) (l:list A) {struct l} : nat :=
+ match l with
+ | nil _ => O
+ | cons _ a l' => S (lgth A l')
+ end.
+Print lgth.
+Fixpoint sizet (t:tree) : nat := let (f) := t in S (sizef f)
+ with sizef (f:forest) : nat :=
+ match f with
+ | emptyf => O
+ | consf t f => plus (sizet t) (sizef f)
+ end.
+Print sizet.
+\end{coq_example}
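+The conversion equalities required above for \length{} indeed hold by
+computation; for instance, the defining equation of \texttt{lgth} on a
+\texttt{cons} is recognized by the conversion rule:
+\begin{coq_example}
+(* both sides are convertible, so the type ascription on eq_refl is accepted *)
+Check (fun (A:Set) (a:A) (l:list A) =>
+  eq_refl : lgth A (cons A a l) = S (lgth A l)).
+\end{coq_example}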
+
+
+\subsubsection[Reduction rule]{Reduction rule\index{iota-reduction@$\iota$-reduction}}
+Let $F$ be the set of declarations: $f_1/k_1:A_1:=t_1 \ldots
+f_n/k_n:A_n:=t_n$.
+The reduction for fixpoints is:
+\[ (\Fix{f_i}{F}~a_1\ldots
+a_{k_i}) \triangleright_{\iota} \substs{t_i}{f_k}{\Fix{f_k}{F}}{k=1\ldots n}
+~a_1\ldots a_{k_i}\]
+when $a_{k_i}$ starts with a constructor.
+This last restriction is needed in order to keep strong normalization
+and corresponds to the reduction for primitive recursive operators.
+%
+The following reductions are now possible:
+\def\plus{\mathsf{plus}}
+\def\tri{\triangleright_\iota}
+\begin{eqnarray*}
+ \plus~(\nS~(\nS~\nO))~(\nS~\nO) & \tri & \nS~(\plus~(\nS~\nO)~(\nS~\nO))\\
+ & \tri & \nS~(\nS~(\plus~\nO~(\nS~\nO)))\\
+ & \tri & \nS~(\nS~(\nS~\nO))\\
+\end{eqnarray*}
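+These reduction steps can be reproduced with the \texttt{compute}
+reduction on the \texttt{plus} function defined above:
+\begin{coq_example}
+(* performs the three reduction steps displayed above *)
+Eval compute in plus (S (S O)) (S O).
+\end{coq_example}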
+
+% La disparition de Program devrait rendre la construction Match obsolete
+% \subsubsection{The {\tt Match \ldots with \ldots end} expression}
+% \label{Matchexpr}
+% %\paragraph{A unary {\tt Match\ldots with \ldots end}.}
+% \index{Match...with...end@{\tt Match \ldots with \ldots end}}
+% The {\tt Match} operator which was a primitive notion in older
+% presentations of the Calculus of Inductive Constructions is now just a
+% macro definition which generates the good combination of {\tt Case}
+% and {\tt Fix} operators in order to generate an operator for primitive
+% recursive definitions. It always considers an inductive definition as
+% a single inductive definition.
+
+% The following examples illustrates this feature.
+% \begin{coq_example}
+% Definition nat_pr : (C:Set)C->(nat->C->C)->nat->C
+% :=[C,x,g,n]Match n with x g end.
+% Print nat_pr.
+% \end{coq_example}
+% \begin{coq_example}
+% Definition forest_pr
+% : (C:Set)C->(tree->forest->C->C)->forest->C
+% := [C,x,g,n]Match n with x g end.
+% \end{coq_example}
+
+% Cet exemple faisait error (HH le 12/12/96), j'ai change pour une
+% version plus simple
+%\begin{coq_example}
+%Definition forest_pr
+% : (P:forest->Set)(P emptyf)->((t:tree)(f:forest)(P f)->(P (consf t f)))
+% ->(f:forest)(P f)
+% := [C,x,g,n]Match n with x g end.
+%\end{coq_example}
+
+\subsubsection{Mutual induction}
+
+The principles of mutual induction can be automatically generated
+using the {\tt Scheme} command described in Section~\ref{Scheme}.
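+For instance, a mutual induction principle for the types \texttt{tree} and
+\texttt{forest} defined earlier in this chapter can be generated as
+follows (see Section~\ref{Scheme} for the details of the command):
+\begin{coq_example}
+(* generates two mutually defined induction principles *)
+Scheme tree_forest_rec := Induction for tree Sort Set
+  with forest_tree_rec := Induction for forest Sort Set.
+Check tree_forest_rec.
+\end{coq_example}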
+
+\section{Admissible rules for global environments}
+
+From the original rules of the type system, one can show the
+admissibility of rules which change the local context of definition of
+objects in the global environment. We show here the admissible rules
+that are used in the discharge mechanism at the end of a section.
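+As an illustration, these rules justify the behavior of the section
+mechanism: a definition made under a local assumption is generalized over
+that assumption when the section is closed (a small example, with names
+chosen to match the abstraction rules below):
+\begin{coq_example}
+Section Discharge.
+Variable c : nat.
+Definition c' := S c.  (* c' uses the local assumption c *)
+End Discharge.
+Check c'.  (* after discharge, c' is abstracted over c *)
+\end{coq_example}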
+
+% This is obsolete: Abstraction over defined constants actually uses a
+% let-in since there are let-ins in Coq
+
+%% \paragraph{Mechanism of substitution.}
+
+%% One rule which can be proved valid, is to replace a term $c$ by its
+%% value in the global environment. As we defined the substitution of a term for
+%% a variable in a term, one can define the substitution of a term for a
+%% constant. One easily extends this substitution to local contexts and global
+%% environments.
+
+%% \paragraph{Substitution Property:}
+%% \inference{\frac{\WF{E;c:=t:T; E'}{\Gamma}}
+%% {\WF{E; \subst{E'}{c}{t}}{\subst{\Gamma}{c}{t}}}}
+
+\paragraph{Abstraction.}
+
+One can modify a global declaration by generalizing it over a
+previously assumed constant $c$. For doing that, we need to modify the
+reference to the global declaration in the subsequent global
+environment and local context by explicitly applying this declaration to
+the constant $c$.
+
+Below, if $\Gamma$ is a context of the form
+$[y_1:A_1;\ldots;y_n:A_n]$, we write $\forall
+x:U,\subst{\Gamma}{c}{x}$ to mean
+$[y_1:\forall~x:U,\subst{A_1}{c}{x};\ldots;y_n:\forall~x:U,\subst{A_n}{c}{x}]$
+and
+$\subst{E}{|\Gamma|}{|\Gamma|c}$
+to mean the parallel substitution
+$\subst{\subst{E}{y_1}{(y_1~c)}\ldots}{y_n}{(y_n~c)}$.
+
+\paragraph{First abstracting property:}
+ \inference{\frac{\WF{E;c:U;E';c':=t:T;E''}{\Gamma}}
+ {\WF{E;c:U;E';c':=\lb x:U\mto \subst{t}{c}{x}:\forall~x:U,\subst{T}{c}{x};
+   \subst{E''}{c'}{(c'~c)}}{\subst{\Gamma}{c'}{(c'~c)}}}}
+
+ \inference{\frac{\WF{E;c:U;E';c':T;E''}{\Gamma}}
+ {\WF{E;c:U;E';c':\forall~x:U,\subst{T}{c}{x};
+   \subst{E''}{c'}{(c'~c)}}{\subst{\Gamma}{c'}{(c'~c)}}}}
+
+ \inference{\frac{\WF{E;c:U;E';\Ind{}{p}{\Gamma_I}{\Gamma_C};E''}{\Gamma}}
+ {\WFTWOLINES{E;c:U;E';\Ind{}{p+1}{\forall x:U,\subst{\Gamma_I}{c}{x}}{\forall x:U,\subst{\Gamma_C}{c}{x}};\subst{E''}{|\Gamma_I,\Gamma_C|}{|\Gamma_I,\Gamma_C|~c}}{\subst{\Gamma}{|\Gamma_I,\Gamma_C|}{|\Gamma_I,\Gamma_C|~c}}}}
+%
+One can similarly modify a global declaration by generalizing it over
+a previously defined constant~$c'$. Below, if $\Gamma$ is a context
+of the form $[y_1:A_1;\ldots;y_n:A_n]$, we write $
+\subst{\Gamma}{c}{u}$ to mean
+$[y_1:\subst{A_1}{c}{u};\ldots;y_n:\subst{A_n}{c}{u}]$.
+
+\paragraph{Second abstracting property:}
+ \inference{\frac{\WF{E;c:=u:U;E';c':=t:T;E''}{\Gamma}}
+ {\WF{E;c:=u:U;E';c':=(\letin{x}{u:U}{\subst{t}{c}{x}}):\subst{T}{c}{u};E''}{\Gamma}}}
+
+ \inference{\frac{\WF{E;c:=u:U;E';c':T;E''}{\Gamma}}
+ {\WF{E;c:=u:U;E';c':\subst{T}{c}{u};E''}{\Gamma}}}
+
+ \inference{\frac{\WF{E;c:=u:U;E';\Ind{}{p}{\Gamma_I}{\Gamma_C};E''}{\Gamma}}
+ {\WF{E;c:=u:U;E';\Ind{}{p}{\subst{\Gamma_I}{c}{u}}{\subst{\Gamma_C}{c}{u}};E''}{\Gamma}}}
+
+\paragraph{Pruning the local context.}
+If one abstracts or substitutes constants with the above rules then it
+may happen that some declared or defined constant no longer occurs
+in the subsequent global environment or in the local context. One can
+consequently derive the following properties.
+
+\paragraph{First pruning property:}
+\inference{\frac{\WF{E;c:U;E'}{\Gamma} \qquad c \mbox{ does not occur in $E'$ and $\Gamma$}}
+ {\WF{E;E'}{\Gamma}}}
+
+\paragraph{Second pruning property:}
+\inference{\frac{\WF{E;c:=u:U;E'}{\Gamma} \qquad c \mbox{ does not occur in $E'$ and $\Gamma$}}
+ {\WF{E;E'}{\Gamma}}}
+
+\section{Co-inductive types}
+The implementation also contains co-inductive definitions, which are
+types inhabited by infinite objects.
+More information on co-inductive definitions can be found
+in~\cite{Gimenez95b,Gim98,GimCas05}.
+%They are described in Chapter~\ref{Co-inductives}.
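+A typical example is the type of infinite streams over a type $A$ (an
+illustrative declaration; this type is not predefined in this chapter):
+\begin{coq_example*}
+CoInductive Stream (A:Set) : Set :=
+  Cons : A -> Stream A -> Stream A.  (* a stream has a head and an infinite tail *)
+\end{coq_example*}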
+
+\section[The Calculus of Inductive Constructions with
+  impredicative \Set]{The Calculus of Inductive Constructions with
+ impredicative \Set\label{impredicativity}}
+
+\Coq{} can be used as a type-checker for the
+Calculus of Inductive Constructions with an impredicative sort \Set{}
+by using the compiler option \texttt{-impredicative-set}.
+%
+For example, using the ordinary \texttt{coqtop} command, the following
+is rejected.
+% (** This example should fail *******************************
+% Error: The term forall X:Set, X -> X has type Type
+% while it is expected to have type Set ***)
+\begin{coq_example}
+Fail Definition id: Set := forall X:Set,X->X.
+\end{coq_example}
+while it will type-check if one instead uses the \texttt{coqtop
+ -impredicative-set} command.
+
+The major change in the theory concerns the rule for product formation
+in the sort \Set, which is extended to a domain in any sort:
+\begin{description}
+\item [Prod] \index{Typing rules!Prod (impredicative Set)}
+\inference{\frac{\WTEG{T}{s}~~~~s \in \Sort~~~~~~
+ \WTE{\Gamma::(x:T)}{U}{\Set}}
+ { \WTEG{\forall~x:T,U}{\Set}}}
+\end{description}
+This extension has consequences on the inductive definitions which are
+allowed.
+In the impredicative system, one can build so-called {\em large inductive
+  definitions} like the example of the second-order existential
+quantifier (\texttt{exSet}).
+
+There should be restrictions on the eliminations which can be
+performed on such definitions. The elimination rules in the
+impredicative system for sort \Set{} become:
+\begin{description}
+\item[\Set] \inference{\frac{s \in
+ \{\Prop, \Set\}}{\compat{I:\Set}{I\ra s}}
+~~~~\frac{I \mbox{~is a small inductive definition}~~~~s \in
+ \{\Type(i)\}}
+ {\compat{I:\Set}{I\ra s}}}
+\end{description}
+
+%%% Local Variables:
+%%% mode: latex
+%%% TeX-master: "Reference-Manual"
+%%% End:
+
+