author     Guillaume Melquiond <guillaume.melquiond@inria.fr>   2015-01-08 16:27:45 +0100
committer  Guillaume Melquiond <guillaume.melquiond@inria.fr>   2015-01-08 16:27:58 +0100
commit     448bf4529c5766e98367345076d00e64e25db7bf (patch)
tree       e0f52bf0eeeafb0289d0d527eaa4372b0fa71ba5
parent     7b95055821aab5e6d03a5b17dd6257139181f0a8 (diff)
Fix some documentation typos.
-rw-r--r--  doc/refman/AsyncProofs.tex | 199
-rw-r--r--  doc/refman/Cases.tex       |   2
-rw-r--r--  doc/refman/Classes.tex     |   4
-rw-r--r--  doc/refman/Extraction.tex  |  76
-rw-r--r--  doc/refman/Setoid.tex      |  13
-rw-r--r--  doc/refman/Universes.tex   |  31
6 files changed, 166 insertions, 159 deletions
diff --git a/doc/refman/AsyncProofs.tex b/doc/refman/AsyncProofs.tex
index bef746729..797bf2434 100644
--- a/doc/refman/AsyncProofs.tex
+++ b/doc/refman/AsyncProofs.tex
@@ -6,162 +6,167 @@
This chapter explains how proofs can be asynchronously processed by Coq.
This feature improves the reactiveness of the system when used in interactive
-mode via CoqIDE. In addition to that it allows Coq to take advantage of
+mode via CoqIDE. In addition to that, it allows Coq to take advantage of
parallel hardware when used as a batch compiler by decoupling the checking
of statements and definitions from the construction and checking of proof
objects.
This feature is designed to help dealing with huge libraries of theorems
-characterized by long proofs. At the current state it may not be beneficial
-on small set of short files.
+characterized by long proofs. In the current state, it may not be beneficial
+on small sets of short files.
This feature has some technical limitations that may make it unsuitable for
some use cases.
-For example in interactive mode errors coming from the kernel of Coq are
-signalled late. The most notable type of errors belonging to this category are
-universes inconsistency or unsatisfied guardedness conditions for fixpoints
+For example, in interactive mode, errors coming from the kernel of Coq are
+signaled late. The most notable type of errors belonging to this category are
+universe inconsistencies or unsatisfied guardedness conditions for fixpoints
built using tactics.
-Last, at the time of writing, only opaque proofs (ending with $Qed$) can be
+Last, at the time of writing, only opaque proofs (ending with \texttt{Qed}) can be
processed asynchronously.
\subsection{Proof annotations}
To process a proof asynchronously Coq needs to know the precise statement
of the theorem without looking at the proof. This requires some annotations
-if the theorem is proved inside a $Section$ (see Section~\ref{Section}).
+if the theorem is proved inside a \texttt{Section} (see Section~\ref{Section}).
-When a section is ended Coq looks at the proof object to decide which
+When a section ends, Coq looks at the proof object to decide which
section variables are actually used and hence have to be quantified in the
-statement of the theorem. To make the construction of the proofs not
-mandatory for ending a section one can start each proof with the
-$Proof using$ command~\ref{ProofUsing} that declares the subset of section
-variables the theorem uses.
+statement of the theorem. To avoid making the construction of proofs
+mandatory when ending a section, one can start each proof with the
+\texttt{Proof using} command (Section~\ref{ProofUsing}) that declares which
+section variables the theorem uses.
-The presence of $Proof using$ is mandatory to process proofs asynchronously
-in interactive mode.
+The presence of \texttt{Proof using} is needed to process proofs
+asynchronously in interactive mode.
-It is not strictly mandatory in batch mode if it is not the first time the
-file is compiled and if the file itself did not change. In case the
-proof does not begin with $Proof using$ the system records in an auxiliary
-file, produced along with the $.vo$ file, the list of section variable used.
+It is not strictly mandatory in batch mode if it is not the first time
+the file is compiled and if the file itself did not change. When the
+proof does not begin with \texttt{Proof using}, the system records in an
+auxiliary file, produced along with the \texttt{.vo} file, the list of
+section variables used.
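As an editorial illustration (not part of this patch; the section, variable, and lemma names are hypothetical), a minimal sketch of such an annotation could look as follows:
\begin{verbatim}
Section Ex.
  Variable A : Type.
  Variable a : A.

  Lemma a_refl : a = a.
  Proof using A a.    (* declare the section variables the proof may use *)
    reflexivity.
  Qed.
End Ex.
\end{verbatim}
Thanks to the annotation, the set of section variables to quantify over is known without inspecting the proof object.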
\subsubsection{Automatic suggestion of proof annotations}
-The command $Set Suggest Proof Using$ makes Coq suggest, when a $Qed$
-command is processed, a correct proof annotation. It is up to the user
-to modify the proof script adding the proof annotation.
+The command \texttt{Set Suggest Proof Using} makes Coq suggest, when a
+\texttt{Qed} command is processed, a correct proof annotation. It is up
+to the user to modify the proof script accordingly.
\subsection{Interactive mode}
At the time of writing the only user interface supporting asynchronous proof
-processing is CoqIDE.
+processing is CoqIDE.
-When CoqIDE is started two Coq processes are created. The master one follows
-the user, giving feedback as soon as possible by skipping proofs, that are
+When CoqIDE is started, two Coq processes are created. The master one follows
+the user, giving feedback as soon as possible by skipping proofs, which are
delegated to the worker process. The worker process, whose state can be seen
by clicking on the button in the lower right corner of the main CoqIDE window,
asynchronously processes the proofs. If a proof contains an error, it is
reported in red in the label of the very same button, that can also be used to
see the list of errors and jump to the corresponding line.
-If a proof is processed asynchronously the corresponding $Qed$ command is
-is colored using a color that is lighter than usual. This signals that
+If a proof is processed asynchronously, the corresponding \texttt{Qed} command
+is colored using a lighter color than usual. This signals that
the proof has been delegated to a worker process (or will be processed
-lazily if the $-async-proofs lazy$ option is used). Once finished the
+lazily if the \texttt{-async-proofs lazy} option is used). Once finished, the
worker process will provide the proof object, but this will not be
-automatically checked by the kernel of the main process. To force
-the kernel to check all proof objects one has to click the button
-with the gears.
-
-Only when the gears button is pressed all universe constraints are checked.
+automatically checked by the kernel of the main process. To force
+the kernel to check all the proof objects, one has to click the button
+with the gears. Only then are all the universe constraints checked.
\subsubsection{Caveats}
+
The number of worker processes can be increased by passing CoqIDE the
-$-async-proofs-j 2$ flag. Note that the memory consumption increases
-too, since each worker has to be an exact copy of the master process
-and requires the same amount of memory. Also note that the master process
-has to both respond to the user and manage the workers, hence increasing
-their number may slow down the master process.
+\texttt{-async-proofs-j $n$} flag. Note that the memory consumption
+increases too, since each worker requires the same amount of memory as
+the master process. Also note that increasing the number of workers may
+reduce the reactivity of the master process to user commands.
-To disable this feature it is enough to pass the $-async-proofs off$ flag to
+To disable this feature, one can pass the \texttt{-async-proofs off} flag to
CoqIDE.
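For illustration only (these command lines are not part of the patch and \texttt{MyFile.v} is a placeholder), the flags above would typically be passed as follows:
\begin{verbatim}
coqide -async-proofs-j 2 MyFile.v    # use two worker processes
coqide -async-proofs off MyFile.v    # disable asynchronous proof processing
\end{verbatim}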
\subsection{Batch mode}
-When Coq is used as a batch compiler by running $coqc$ or $coqtop -compile$
-it produces a $.vo$ file for each $.v$ file. A $.vo$ file contains, among
-other things, theorems statements and proofs. Hence to produce a $.vo$
-Coq need to process all the proofs of the $.v$ file.
+When Coq is used as a batch compiler by running \texttt{coqc} or
+\texttt{coqtop -compile}, it produces a \texttt{.vo} file for each
+\texttt{.v} file. A \texttt{.vo} file contains, among other things,
+theorem statements and proofs. Hence, to produce a \texttt{.vo} file, Coq needs
+to process all the proofs of the \texttt{.v} file.
The asynchronous processing of proofs can decouple the generation of a
-compiled file (like the $.vo$ one) that can be $Required$ from the generation
-and checking of the proof objects. The $-quick$ flag can be passed to
-$coqc$ or $coqtop$ to produce, quickly, $.vio$ files. Alternatively, if
-the $Makefile$ one uses was produced by $coq\_makefile$ the $quick$
-target can be used to compile all files using the $-quick$ flag.
-
-A $.vio$ file can be $Required$ exactly as a $.vo$ file but: 1) proofs are
-not available (the $Print$ command produces an error); 2) some universe
-constraints are missing, hence universes inconsistencies may not be
-discovered. A $.vio$ file does not contain proof objects, but proof tasks,
+compiled file (like the \texttt{.vo} one) that can be loaded by
+\texttt{Require} from the generation and checking of the proof objects.
+The \texttt{-quick} flag can be passed to \texttt{coqc} or
+\texttt{coqtop} to produce, quickly, \texttt{.vio} files. Alternatively,
+when using a \texttt{Makefile} produced by \texttt{coq\_makefile}, the
+\texttt{quick} target can be used to compile all files using the
+\texttt{-quick} flag.
+
+A \texttt{.vio} file can be loaded using \texttt{Require} exactly as a
+\texttt{.vo} file but proofs will not be available (the \texttt{Print}
+command produces an error). Moreover, some universe constraints might be
+missing, so universe inconsistencies might go unnoticed. A
+\texttt{.vio} file does not contain proof objects, but proof tasks,
i.e. what a worker process can transform into a proof object.
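As a sketch (file names are placeholders), producing \texttt{.vio} files amounts to:
\begin{verbatim}
coqc -quick a.v    # produces a.vio instead of a.vo
make quick         # same, when the Makefile was produced by coq_makefile
\end{verbatim}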
-Compiling a set of files with the $-quick$ flag allows one to work,
-interactively, on any file without waiting for all proofs to be checked.
-
-While one works interactively, he can fully check all $.v$ files by
-running $coqc$ as usual.
-
-Alternatively one can turn each $.vio$ into the corresponding $.vo$.
-All $.vio$ files can be processed in parallel, hence this alternative
-may be faster. The command $coqtop -schedule-vio2vo 2 a b c$
-can be used to obtain a good scheduling for 2 workers to produce
-$a.vo$, $b.vo$ and $c.vo$. If the $Makefile$ one uses was produced
-by $coq\_makefile$ the $vio2vo$ target can be used for that.
-The variable $J$ should be set to the number of workers, like
-in $make vio2vo J=2$. The only caveat is that, while the $.vo$ file
-obtained from $.vio$ files are complete (they contain all proof terms
-and universe constraints) the satisfiability of all universe constraints
-has not been checked globally (they are checked to be consistent for
-every single proof). Constraints will be checked when these $.vo$ files
-are (recursively) loaded with $Require$.
-
-There is an extra, possibly even faster, alternative: just check
-the proof tasks store in $.vio$ files without producing the $.vo$ files.
-This is possibly faster because all proof tasks are independent, hence
-one can further partition the job to be done between workers.
-The $coqtop -schedule-vio-checking 6 a b c$ command can be used to obtain
-a good scheduling for 6 workers to check all proof tasks of $a.vio$, $b.vio$ and
-$c.vio$. Auxiliary files are used to predict how long a proof task will take,
-assuming it will take the same amount of time it took last time.
-If the $Makefile$ one uses was produced by $coq\_makefile$ the $checkproofs$
-target can be used to check all $.vio$ files. The variable $J$ should be
-set to the number of workers, like in $make checkproofs J=6$.
-Checking all proof tasks does not guarantee the same degree of safety
-that producing a $.vo$ file gives. In particular universe constraints
-are checked to be consistent for every single proof, but not globally.
-Hence this compilation mode is only useful for quick regression testing
-and on developments not making heavy use of the $Type$ hierarchy.
+Compiling a set of files with the \texttt{-quick} flag allows one to work,
+interactively, on any file without waiting for all the proofs to be checked.
+
+When working interactively, one can fully check all the \texttt{.v} files by
+running \texttt{coqc} as usual.
+
+Alternatively one can turn each \texttt{.vio} into the corresponding
+\texttt{.vo}. All \texttt{.vio} files can be processed in parallel,
+hence this alternative might be faster. The command \texttt{coqtop
+ -schedule-vio2vo 2 a b c} can be used to obtain a good scheduling for 2
+workers to produce \texttt{a.vo}, \texttt{b.vo}, and \texttt{c.vo}. When
+using a \texttt{Makefile} produced by \texttt{coq\_makefile}, the
+\texttt{vio2vo} target can be used for that purpose. Variable \texttt{J}
+should be set to the number of workers, e.g. \texttt{make vio2vo J=2}.
+The only caveat is that, while the \texttt{.vo} files obtained from
+\texttt{.vio} files are complete (they contain all proof terms and
+universe constraints), the satisfiability of all universe constraints has
+not been checked globally (they are checked to be consistent for every
+single proof). Constraints will be checked when these \texttt{.vo} files
+are (recursively) loaded with \texttt{Require}.
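For instance, turning three \texttt{.vio} files into \texttt{.vo} files with two workers (file names are placeholders) looks like:
\begin{verbatim}
coqtop -schedule-vio2vo 2 a b c    # good schedule for 2 workers producing a.vo, b.vo, c.vo
make vio2vo J=2                    # same, when the Makefile was produced by coq_makefile
\end{verbatim}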
+
+There is an extra, possibly even faster, alternative: just check the
+proof tasks stored in \texttt{.vio} files without producing the
+\texttt{.vo} files. This is possibly faster because all the proof tasks
+are independent, hence one can further partition the job to be done
+between workers. The \texttt{coqtop -schedule-vio-checking 6 a b c}
+command can be used to obtain a good scheduling for 6 workers to check
+all the proof tasks of \texttt{a.vio}, \texttt{b.vio}, and
+\texttt{c.vio}. Auxiliary files are used to predict how long a proof task
+will take, assuming it will take the same amount of time it took last
+time. When using a \texttt{Makefile} produced by \texttt{coq\_makefile},
+the \texttt{checkproofs} target can be used to check all \texttt{.vio}
+files. Variable \texttt{J} should be set to the number of workers,
+e.g. \texttt{make checkproofs J=6}. As when converting \texttt{.vio}
+files to \texttt{.vo} files, universe constraints are not checked to be
+globally consistent. Hence this compilation mode is only useful for quick
+regression testing and on developments not making heavy use of the \texttt{Type}
+hierarchy.
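Again as a sketch (file names are placeholders), checking the proof tasks of three files with six workers looks like:
\begin{verbatim}
coqtop -schedule-vio-checking 6 a b c    # check the proof tasks of a.vio, b.vio, c.vio
make checkproofs J=6                     # same, when the Makefile was produced by coq_makefile
\end{verbatim}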
\subsection{Limiting the number of parallel workers}
\label{coqworkmgr}
Many Coq processes may run on the same computer, and each of them may start
many additional worker processes.
-The $coqworkmgr$ utility lets one limit the number of workers, globally.
+The \texttt{coqworkmgr} utility lets one limit the number of workers, globally.
-The utility accepts the $-j$ argument to specify the maximum number of
-workers (defaults to 2). $coqworkmgr$ automatically starts in the
+The utility accepts the \texttt{-j} argument to specify the maximum number of
+workers (defaults to 2). \texttt{coqworkmgr} automatically starts in the
background and prints an environment variable assignment like
-$COQWORKMGR\_SOCKET=localhost:45634$. The user must set this variable in
-all the shells from which he will start Coq processes. If one uses just
-one terminal running the bash shell, then $export `coqworkmgr -j 4`$ will
+\texttt{COQWORKMGR\_SOCKET=localhost:45634}. The user must set this variable in
+all the shells from which Coq processes will be started. If one uses just
+one terminal running the bash shell, then \texttt{export `coqworkmgr -j 4`} will
do the job.
-After that all Coq processes, like $coqide$ and $coqc$ will honor such
-limit, globally.
+After that, all Coq processes, e.g. \texttt{coqide} and \texttt{coqc},
+will honor the limit, globally.
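A typical bash session (illustrative only) would therefore look like:
\begin{verbatim}
export `coqworkmgr -j 4`    # starts coqworkmgr, sets COQWORKMGR_SOCKET in this shell
coqide &                    # this and later Coq processes honor the global limit of 4
\end{verbatim}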
%%% Local Variables:
diff --git a/doc/refman/Cases.tex b/doc/refman/Cases.tex
index 2552feef9..7e01edeb0 100644
--- a/doc/refman/Cases.tex
+++ b/doc/refman/Cases.tex
@@ -728,7 +728,7 @@ The elimination predicate provided to \texttt{match} has not the
% several \texttt{match} with {\em simple patterns}.
\item {\tt Unable to infer a match predicate\\
- Either there is a type incompatiblity or the problem involves\\
+ Either there is a type incompatibility or the problem involves\\
dependencies}
There is a type mismatch between the different branches.
diff --git a/doc/refman/Classes.tex b/doc/refman/Classes.tex
index 1d134269f..f683e4543 100644
--- a/doc/refman/Classes.tex
+++ b/doc/refman/Classes.tex
@@ -127,7 +127,7 @@ particular support for type classes:
class arguments, making derived functions as easy to use as class
methods. In the example above, \texttt{A} and \texttt{eqa} should be
set maximally implicit.
-\item They support implicit quantification on partialy applied type
+\item They support implicit quantification on partially applied type
classes (\S \ref{implicit-generalization}).
Any argument not given as part of a type class binder will be
automatically generalized.
@@ -416,7 +416,7 @@ based on a variant of eauto. The flags semantics are:
\subsection{\tt Set Refine Instance Mode}
\comindex{Set Refine Instance Mode}\comindex{Unset Refine Instance Mode}
-The option {\tt Refine Instance Mode} allows to switch the behaviour of instance
+The option {\tt Refine Instance Mode} allows one to switch the behavior of instance
declarations made through the {\tt Instance} command.
\begin{itemize}
\item When it is on (the default), instances that have unsolved holes in their
diff --git a/doc/refman/Extraction.tex b/doc/refman/Extraction.tex
index 86c16f635..e9664f791 100644
--- a/doc/refman/Extraction.tex
+++ b/doc/refman/Extraction.tex
@@ -10,14 +10,14 @@ functional languages available as output are currently \ocaml{},
\textsc{Haskell} and \textsc{Scheme}. In the following, ``ML'' will
be used (abusively) to refer to any of the three.
-\paragraph{Differences with old versions.}
-The current extraction mechanism is new for version 7.0 of {\Coq}.
-In particular, the \FW\ toplevel used as an intermediate step between
-\Coq\ and ML has been withdrawn. It is also not possible
-any more to import ML objects in this \FW\ toplevel.
-The current mechanism also differs from
-the one in previous versions of \Coq: there is no more
-an explicit toplevel for the language (formerly called \textsc{Fml}).
+%% \paragraph{Differences with old versions.}
+%% The current extraction mechanism is new for version 7.0 of {\Coq}.
+%% In particular, the \FW\ toplevel used as an intermediate step between
+%% \Coq\ and ML has been withdrawn. It is also not possible
+%% any more to import ML objects in this \FW\ toplevel.
+%% The current mechanism also differs from
+%% the one in previous versions of \Coq: there is no more
+%% an explicit toplevel for the language (formerly called \textsc{Fml}).
\asection{Generating ML code}
\comindex{Extraction}
@@ -31,9 +31,9 @@ extraction. They both display extracted term(s) inside \Coq.
\begin{description}
\item {\tt Extraction \qualid.} ~\par
- Extracts one constant or module in the \Coq\ toplevel.
+ Extraction of a constant or module in the \Coq\ toplevel.
-\item {\tt Recursive Extraction \qualid$_1$ \dots\ \qualid$_n$.} ~\par
+\item {\tt Recursive Extraction} \qualid$_1$ \dots\ \qualid$_n$. ~\par
Recursive extraction of all the globals (or modules) \qualid$_1$ \dots\
\qualid$_n$ and all their dependencies in the \Coq\ toplevel.
\end{description}
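As an illustration (not part of the patch; \texttt{double} is a hypothetical definition), these commands are used as follows:
\begin{verbatim}
Definition double (n : nat) := n + n.
Extraction double.              (* display the extracted code of double *)
Recursive Extraction double.    (* double together with all its dependencies *)
\end{verbatim}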
@@ -72,7 +72,7 @@ one monolithic file or one file per \Coq\ library.
to {\tt Recursive Extraction Library}, except that only the needed
parts of Coq libraries are extracted instead of the whole. The
naming convention in case of name clash is the same one as
- {\tt Extraction Library} : identifiers are here renamed
+ {\tt Extraction Library}: identifiers are here renamed
using prefixes \verb!coq_! or \verb!Coq_!.
\end{description}
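For instance (a sketch only, \texttt{MyLib} being a placeholder for a compiled \Coq\ library):
\begin{verbatim}
Extraction Library MyLib.              (* extract the single library MyLib *)
Recursive Extraction Library MyLib.    (* MyLib and the libraries it depends on *)
\end{verbatim}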
@@ -101,19 +101,19 @@ induction principles we do not want to compute all the recursive calls
but only the needed ones). So the extraction mechanism provides an
automatic optimization routine that will be called each time the user
want to generate Ocaml programs. The optimizations can be split in two
-groups: the type preserving ones -- essentially constant inlining and
-reductions -- and the non type perserving ones -- some function
-abstractions of dummy types are removed when it's deemed safe in order
-to have more elegant types. Therefore some constants may not appear in
+groups: the type-preserving ones -- essentially constant inlining and
+reductions -- and the non type-preserving ones -- some function
+abstractions of dummy types are removed when it is deemed safe in order
+to have more elegant types. Therefore some constants may not appear in the
resulting monolithic Ocaml program. In the case of modular extraction,
even if some inlining is done, the inlined constants are nevertheless
printed, to ensure session-independent programs.
-Concerning Haskell, type preserving optimizations are less useful
+Concerning Haskell, type-preserving optimizations are less useful
because of lazyness. We still make some optimizations, for example in
order to produce more readable code.
-The type preserving optimizations are controled by the following \Coq\ options:
+The type-preserving optimizations are controlled by the following \Coq\ options:
\begin{description}
@@ -123,7 +123,7 @@ The type preserving optimizations are controled by the following \Coq\ options:
\item \comindex{Unset Extraction Optimize}
{\tt Unset Extraction Optimize.}
-Default is Set. This control all type preserving optimizations made on
+Default is Set. This controls all type-preserving optimizations made on
the ML terms (mostly reduction of dummy beta/iota redexes, but also
simplifications on Cases, etc). Put this option to Unset if you want a
ML term as close as possible to the Coq term.
@@ -134,9 +134,9 @@ ML term as close as possible to the Coq term.
\item \comindex{Unset Extraction Conservative Types}
{\tt Unset Extraction Conservative Types.}
-Default is Unset. This control the non type preserving optimizations
+Default is Unset. This controls the non type-preserving optimizations
made on ML terms (which try to avoid function abstraction of dummy
-types). If you turn this option to Set to make sure that {\tt e:t}
+types). Turn this option to Set to make sure that {\tt e:t}
implies that {\tt e':t'} where {\tt e'} and {\tt t'} are the extracted
code of {\tt e} and {\tt t} respectively.
@@ -146,7 +146,7 @@ code of {\tt e} and {\tt t} respectively.
\item \comindex{Unset Extraction KeepSingleton}
{\tt Unset Extraction KeepSingleton.}
-Default is Unset. Normaly, when the extraction of an inductive type
+Default is Unset. Normally, when the extraction of an inductive type
produces a singleton type (i.e. a type with only one constructor, and
only one argument to this constructor), the inductive structure is
removed and this type is seen as an alias to the inner type.
@@ -159,10 +159,10 @@ optimization when one wishes to preserve the inductive structure of types.
\item \comindex{Unset Extraction AutoInline}
{\tt Unset Extraction AutoInline.}
-Default is Set, so by default, the extraction mechanism feels free to
-inline the bodies of some defined constants, according to some heuristics
-like size of bodies, useness of some arguments, etc. Those heuristics are
-not always perfect, you may want to disable this feature, do it by Unset.
+Default is Set, so by default, the extraction mechanism is free to
+inline the bodies of some defined constants, according to some heuristics
+like size of bodies, uselessness of some arguments, etc. Those heuristics are
+not always perfect; if you want to disable this feature, do it by Unset.
\item \comindex{Extraction Inline}
{\tt Extraction Inline} \qualid$_1$ \dots\ \qualid$_n$.
@@ -170,7 +170,7 @@ not always perfect, you may want to disable this feature, do it by Unset.
\item \comindex{Extraction NoInline}
{\tt Extraction NoInline} \qualid$_1$ \dots\ \qualid$_n$.
-In addition to the automatic inline feature, you can now tell precisely to
+In addition to the automatic inline feature, you can explicitly ask to
inline some more constants by the {\tt Extraction Inline} command. Conversely,
you can forbid the automatic inlining of some specific constants by
the {\tt Extraction NoInline} command.
@@ -236,7 +236,7 @@ principles of extraction (logical parts and types).
Extraction will fail if it encounters an informative
axiom not realized (see Section~\ref{extraction:axioms}).
-A warning will be issued if it encounters an logical axiom, to remind
+A warning will be issued if it encounters a logical axiom, to remind the
user that inconsistent logical axioms may lead to incorrect or
non-terminating extracted terms.
@@ -264,7 +264,7 @@ realized and inlined axiom.
Of course, it is the responsibility of the user to ensure that the ML
terms given to realize the axioms do have the expected types. In
-fact, the strings containing realizing code are just copied in the
+fact, the strings containing realizing code are just copied to the
extracted files. The extraction recognizes whether the realized axiom
should become a ML type constant or a ML object declaration.
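A minimal sketch of such a realization (the axioms and the realizing code are purely illustrative):
\begin{verbatim}
Axiom key : Set.
Axiom default_key : key.
Extract Constant key => "int".          (* realized as an ML type constant *)
Extract Constant default_key => "0".    (* realized as an ML object *)
\end{verbatim}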
@@ -281,7 +281,7 @@ arity, that is a sequence of product finished by a sort), then some type
variables has to be given. The syntax is then:
\begin{description}
-\item{\tt Extract Constant \qualid\ \str$_1$ \ldots \str$_n$ => \str.} ~\par
+\item{\tt Extract Constant} \qualid\ \str$_1$ \dots\ \str$_n$ {\tt =>} \str.
\end{description}
The number of type variables is checked by the system.
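For example (a sketch; the axiom and the target OCaml type are purely illustrative):
\begin{verbatim}
Axiom table : Set -> Set -> Set.
Extract Constant table "'a" "'b" => "('a * 'b) list".
\end{verbatim}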
@@ -312,8 +312,7 @@ types and constructors. For instance, the user may want to use the ML
native boolean type instead of \Coq\ one. The syntax is the following:
\begin{description}
-\item{\tt Extract Inductive \qualid\ => \str\ [ \str\ \dots \str\ ]\
-{\it optstring}.} ~\par
+\item{\tt Extract Inductive} \qualid\ {\tt =>} \str\ {\tt [} \str\ \dots\ \str\ {\tt ]}\ {\it optstring}.\par
Give an ML extraction for the given inductive type. You must specify
extractions for the type itself (first \str) and all its
constructors (between square brackets). If given, the final optional
@@ -344,7 +343,7 @@ is quite sound. But extracting to a general type (by providing an
ad-hoc pattern-matching) will often \emph{not} be fully rigorously
correct. For instance, when extracting {\tt nat} to Ocaml's {\tt
int}, it is theoretically possible to build {\tt nat} values that are
-larger than Ocaml's {\tt max\_int}. It is the user's responsability to
+larger than Ocaml's {\tt max\_int}. It is the user's responsibility to
be sure that no overflow or other bad events occur in practice.
\item Translating an inductive type to an ML type does \emph{not}
@@ -391,7 +390,7 @@ For instance the module {\tt List} exists both in \Coq\ and in Ocaml.
It is possible to instruct the extraction not to use particular filenames.
\begin{description}
-\item{\tt Extraction Blacklist \ident \ldots \ident.} ~\par
+\item{\tt Extraction Blacklist} \ident\ \dots\ \ident. ~\par
Instruct the extraction to avoid using these names as filenames
for extracted code.
\item{\tt Print Extraction Blacklist.} ~\par
@@ -425,13 +424,12 @@ Definition dp :=
fun (A B:Set)(x:A)(y:B)(f:forall C:Set, C->C) => (f A x, f B y).
\end{verbatim}
-In Ocaml, for instance, the direct extracted term would be:
-
+In Ocaml, for instance, the direct extracted term would be
\begin{verbatim}
let dp x y f = Pair((f () x),(f () y))
\end{verbatim}
-and would have type:
+and would have type
\begin{verbatim}
dp : 'a -> 'a -> (unit -> 'a -> 'b) -> ('b,'b) prod
\end{verbatim}
@@ -488,7 +486,7 @@ Inductive nat : Set :=
| S : nat -> nat.
\end{coq_example*}
-This module contains a theorem {\tt eucl\_dev}, whose type is:
+This module contains a theorem {\tt eucl\_dev}, whose type is
\begin{verbatim}
forall b:nat, b > 0 -> forall a:nat, diveucl a b
\end{verbatim}
@@ -531,9 +529,9 @@ val eucl_dev : nat -> nat -> diveucl = <fun>
It is easier to test on \ocaml\ integers:
\begin{verbatim}
# let rec nat_of_int = function 0 -> O | n -> S (nat_of_int (n-1));;
-val i2n : int -> nat = <fun>
+val nat_of_int : int -> nat = <fun>
# let rec int_of_nat = function O -> 0 | S p -> 1+(int_of_nat p);;
-val n2i : nat -> int = <fun>
+val int_of_nat : nat -> int = <fun>
# let div a b =
let Divex (q,r) = eucl_dev (nat_of_int b) (nat_of_int a)
in (int_of_nat q, int_of_nat r);;
diff --git a/doc/refman/Setoid.tex b/doc/refman/Setoid.tex
index ca416d7b1..3770ba574 100644
--- a/doc/refman/Setoid.tex
+++ b/doc/refman/Setoid.tex
@@ -14,8 +14,8 @@ custom strategies for rewriting.
This documentation is adapted from the previous setoid documentation by
Claudio Sacerdoti Coen (based on previous work by Cl\'ement Renard).
-The new implementation is a drop-in replacement for the old one \footnote{Nicolas
-Tabareau helped with the gluing}, hence most of the documentation still applies.
+The new implementation is a drop-in replacement for the old one,\footnote{Nicolas
+Tabareau helped with the gluing.} hence most of the documentation still applies.
The work is a complete rewrite of the previous implementation, based on
the type class infrastructure. It also improves on and generalizes
@@ -41,7 +41,7 @@ the previous implementation in several ways:
solution to a set of constraints which can be as fast as linear in the
size of the term, and the size of the proof term is linear
in the size of the original term. Besides, the extensibility allows the
- user to customize the proof-search if necessary.
+ user to customize the proof search if necessary.
\end{itemize}
\asection{Introduction to generalized rewriting}
@@ -274,7 +274,7 @@ Proof. exact (@union_compat A). Qed.
\end{cscexample}
-Is is possible to reduce the burden of specifying parameters using
+It is possible to reduce the burden of specifying parameters using
(maximally inserted) implicit arguments. If \texttt{A} is always set as
maximally implicit in the previous example, one can write:
@@ -516,7 +516,8 @@ Proof. intros. rewrite H. reflexivity. Qed.
The following tactics, all prefixed by \texttt{setoid\_},
deal with arbitrary
registered relations and morphisms. Moreover, all the corresponding unprefixed
-tactics (i.e. \texttt{reflexivity, symmetry, transitivity, replace, rewrite})
+tactics (i.e. \texttt{reflexivity}, \texttt{symmetry}, \texttt{transitivity},
+\texttt{replace}, \texttt{rewrite})
have been extended to fall back to their prefixed counterparts when
the relation involved is not Leibniz equality. Notice, however, that using
the prefixed tactics it is possible to pass additional arguments such as
@@ -720,7 +721,7 @@ The generalized rewriting tactic is based on a set of strategies that
can be combined to obtain custom rewriting procedures. Its set of
strategies is based on Elan's rewriting strategies
\cite{Luttik97specificationof}. Rewriting strategies are applied using
-the tactic $\texttt{rewrite\_strat}~s$ where $s$ is a strategy
+the tactic \texttt{rewrite\_strat $s$} where $s$ is a strategy
expression. Strategies are defined inductively as described by the
following grammar:
diff --git a/doc/refman/Universes.tex b/doc/refman/Universes.tex
index ccd180db6..530086961 100644
--- a/doc/refman/Universes.tex
+++ b/doc/refman/Universes.tex
@@ -10,8 +10,8 @@ This section describes the universe polymorphic extension of Coq.
Universe polymorphism allows writing generic definitions making use of
universes and reuse them at different and sometimes incompatible levels.
-A standard example of the difference between universe \emph{polymorphic} and
-\emph{monormorphic} definitions is given by the identity function:
+A standard example of the difference between universe \emph{polymorphic} and
+\emph{monomorphic} definitions is given by the identity function:
\begin{coq_example*}
Definition identity {A : Type} (a : A) := a.
@@ -30,7 +30,7 @@ Fail Definition selfid := identity (@identity).
\end{coq_example}
Indeed, the global level \texttt{Top.1} would have to be strictly smaller than itself
-for this self-application to typecheck, as the type of (@identity) is
+for this self-application to typecheck, as the type of \texttt{(@identity)} is
\texttt{forall (A : Type@{Top.1}), A -> A} whose type is itself \texttt{Type@{Top.1+1}}.
A universe polymorphic identity function binds its domain universe level
@@ -85,33 +85,36 @@ From this we can build a definition for the monoid of
\texttt{Set}-monoids (where multiplication would be given by the product
of monoids).
-\begin{coq_example}
+\begin{coq_example*}
Polymorphic Definition monoid_monoid : Monoid.
refine (@Build_Monoid Monoid unit_monoid _) ; admit.
Defined.
+\end{coq_example*}
+\begin{coq_example}
Print monoid_monoid.
\end{coq_example}
As one can see from the constraints, this monoid is ``large'', it lives
in a universe strictly higher than \texttt{Set}.
-\asubsection{\tt Polymorphic, Monomorphic}
+\asection{\tt Polymorphic, Monomorphic}
\comindex{Polymorphic}
\comindex{Monomorphic}
\comindex{Set Universe Polymorphism}
As shown in the examples, polymorphic definitions and inductives can be
declared using the \texttt{Polymorphic} prefix. There also exists an
-option \texttt{Set Universe Polymorphism} which will implicitely prepend
+option \texttt{Set Universe Polymorphism} which will implicitly prepend
it to any definition of the user. In that case, to make a definition
producing global universe constraints, one can use the
\texttt{Monomorphic} prefix. Many other commands support the
\texttt{Polymorphic} flag, including:
\begin{itemize}
-\item \texttt{Lemma}, \texttt{Axiom}, \ldots. All ``definition''
+\item \texttt{Lemma}, \texttt{Axiom}, and all the other ``definition''
keywords support polymorphism.
-\item \texttt{Variables, Context} in a section, this means that the
+\item \texttt{Variables}, \texttt{Context} in a section support polymorphism.
+ This means that the
variables are discharged polymorphically over definitions that use
them. In other words, two definitions in the section sharing a common
variable will both get parameterized by the universes produced by the
@@ -122,24 +125,24 @@ producing global universe constraints, one can use the
polymorphically, not at a single instance.
\end{itemize}
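As an editorial sketch (the names are hypothetical, not taken from the manual):
\begin{verbatim}
Polymorphic Lemma poly_refl {A : Type} (a : A) : a = a.
Proof. reflexivity. Qed.

Set Universe Polymorphism.
Definition pid {A : Type} (a : A) := a.    (* implicitly made polymorphic *)
Monomorphic Definition pid_nat := pid 0.   (* opts back into global constraints *)
\end{verbatim}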
-\asubsection{Conversion and unification}
+\asection{Conversion and unification}
The semantics of conversion and unification have to be modified a little
-bit to account for the new universe instance arguments to polymorphic
+to account for the new universe instance arguments to polymorphic
references. The semantics respect the fact that definitions are
-transparent, so undistinguishable from their bodies during conversion.
+transparent, so indistinguishable from their bodies during conversion.
This is accomplished by changing one rule of unification, the
first-order approximation rule:
+TODO
+\asection{Explicit Universes}
-\asubsection{Explicit Universes}
-
-
+TODO
%%% Local Variables: