Diffstat (limited to 'dev')
-rw-r--r--  dev/base_include | 2
-rw-r--r--  dev/ci/README.md | 38
-rw-r--r--  dev/ci/ci-common.sh | 29
-rwxr-xr-x  dev/ci/ci-geocoq.sh | 2
-rw-r--r--  dev/ci/docker/bionic_coq/Dockerfile | 37
-rw-r--r--  dev/ci/user-overlays/00664-herbelin-master+change-for-coq-pr664-compatibility.sh | 4
-rw-r--r--  dev/ci/user-overlays/07677-ejgallego-misctypes+bye2.sh | 8
-rw-r--r--  dev/ci/user-overlays/README.md | 4
-rw-r--r--  dev/doc/MERGING.md | 20
-rw-r--r--  dev/doc/primproj.md | 41
-rw-r--r--  dev/doc/universes.md (renamed from dev/doc/univpoly.txt) | 191
-rw-r--r--  dev/doc/universes.txt | 26
-rw-r--r--  dev/vm_printers.ml | 5
13 files changed, 208 insertions, 199 deletions
diff --git a/dev/base_include b/dev/base_include
index fc38305cc..574bc097e 100644
--- a/dev/base_include
+++ b/dev/base_include
@@ -108,8 +108,6 @@ open Inductiveops
open Locusops
open Find_subterm
open Unification
-open Miscops
-open Miscops
open Nativenorm
open Typeclasses
open Typeclasses_errors
diff --git a/dev/ci/README.md b/dev/ci/README.md
index 697a160ca..665b3768a 100644
--- a/dev/ci/README.md
+++ b/dev/ci/README.md
@@ -47,15 +47,15 @@ CI.
### Add your development by submitting a pull request
-Add a new `ci-mydev.sh` script to [`dev/ci`](/dev/ci) (have a look at
-[`ci-coq-dpdgraph.sh`](/dev/ci/ci-coq-dpdgraph.sh) or
-[`ci-fiat-parsers.sh`](/dev/ci/ci-fiat-parsers.sh) for simple examples);
+Add a new `ci-mydev.sh` script to [`dev/ci`](.) (have a look at
+[`ci-coq-dpdgraph.sh`](ci-coq-dpdgraph.sh) or
+[`ci-fiat-parsers.sh`](ci-fiat-parsers.sh) for simple examples);
set the corresponding variables in
-[`ci-basic-overlay.sh`](/dev/ci/ci-basic-overlay.sh); add the corresponding
-target to [`Makefile.ci`](/Makefile.ci); add new jobs to
-[`.gitlab-ci.yml`](/.gitlab-ci.yml),
-[`.circleci/config.yml`](/.circleci/config.yml) and
-[`.travis.yml`](/.travis.yml) so that this new target is run. **Do not
+[`ci-basic-overlay.sh`](ci-basic-overlay.sh); add the corresponding
+target to [`Makefile.ci`](../../Makefile.ci); add new jobs to
+[`.gitlab-ci.yml`](../../.gitlab-ci.yml),
+[`.circleci/config.yml`](../../.circleci/config.yml) and
+[`.travis.yml`](../../.travis.yml) so that this new target is run. **Do not
hesitate to submit an incomplete pull request if you need help to finish it.**
You may also be interested in having your development tested in our
@@ -83,12 +83,17 @@ We are currently running tests on the following platforms:
- Travis CI is used to test the compilation of Coq and run the test-suite on
macOS. It also runs a linter that checks whitespace discipline. A
- [pre-commit hook](/dev/tools/pre-commit) is automatically installed by
+ [pre-commit hook](../tools/pre-commit) is automatically installed by
`./configure`. It should allow complying with this discipline without pain.
- AppVeyor is used to test the compilation of Coq and run the test-suite on
Windows.
+GitLab CI, Travis CI, and AppVeyor support putting `[ci skip]` in a commit
+message to bypass CI. Do not use this unless your commit only changes files
+that are not compiled (e.g. Markdown files like this one, or files under
+[`.github/`](../../.github/)).
+
You can anticipate the results of most of these tests prior to submitting your
PR by running GitLab CI on your private branches. To do so follow these steps:
@@ -107,7 +112,7 @@ there are some.
You can also run one CI target locally (using `make ci-somedev`).
-See also [`test-suite/README.md`](/test-suite/README.md) for information about adding new tests to the test-suite.
+See also [`test-suite/README.md`](../../test-suite/README.md) for information about adding new tests to the test-suite.
### Breaking changes
@@ -118,7 +123,7 @@ patch (or ask someone to prepare a patch) to fix the project:
the project to your changes.
2. Test your pull request with your adapted version of the external project by
adding an overlay file to your pull request (cf.
- [`dev/ci/user-overlays/README.md`](/dev/ci/user-overlays/README.md)).
+ [`dev/ci/user-overlays/README.md`](user-overlays/README.md)).
3. Fixes to external libraries (pure Coq projects) *must* be backward
compatible (i.e. they should also work with the development version of Coq,
and the latest stable version). This will allow you to open a PR on the
@@ -132,7 +137,7 @@ patch (or ask someone to prepare a patch) to fix the project:
developer who merges the PR on Coq. There are plans to improve this, cf.
[#6724](https://github.com/coq/coq/issues/6724).
-Moreover your PR must absolutely update the [`CHANGES`](/CHANGES) file.
+Moreover your PR must absolutely update the [`CHANGES`](../../CHANGES) file.
Advanced GitLab CI information
------------------------------
@@ -168,8 +173,9 @@ automatically built and uploaded to your GitLab registry, and is
loaded by subsequent jobs.
**IMPORTANT**: When updating Coq's CI docker image, you must modify
-the `CACHEKEY` variable in `.gitlab-ci.yml`, `.circleci/config.yml`,
-and `Dockerfile`.
+the `CACHEKEY` variable in [`.gitlab-ci.yml`](../../.gitlab-ci.yml),
+[`.circleci/config.yml`](../../.circleci/config.yml),
+and [`Dockerfile`](docker/bionic_coq/Dockerfile).
The Docker building job reuses the uploaded image if it is available,
but if you wish to save more time you can skip the job by setting
@@ -177,4 +183,6 @@ but if you wish to save more time you can skip the job by setting
This means you will need to change its value when the Docker image
needs to be updated. You can do so for a single pipeline by starting
-it through the web interface.
+it through the web interface.
+
+See also [`docker/README.md`](docker/README.md).
diff --git a/dev/ci/ci-common.sh b/dev/ci/ci-common.sh
index 5b5cbd11a..85df249d3 100644
--- a/dev/ci/ci-common.sh
+++ b/dev/ci/ci-common.sh
@@ -93,17 +93,26 @@ install_ssreflect()
git_checkout "${mathcomp_CI_BRANCH}" "${mathcomp_CI_GITURL}" "${mathcomp_CI_DIR}"
( cd "${mathcomp_CI_DIR}/mathcomp" && \
- sed -i.bak '/ssrtest/d' Make && \
- sed -i.bak '/odd_order/d' Make && \
- sed -i.bak '/all\/all.v/d' Make && \
- sed -i.bak '/character/d' Make && \
- sed -i.bak '/real_closed/d' Make && \
- sed -i.bak '/solvable/d' Make && \
- sed -i.bak '/field/d' Make && \
- sed -i.bak '/fingroup/d' Make && \
- sed -i.bak '/algebra/d' Make && \
- make Makefile.coq && make -f Makefile.coq all && make install )
+ make Makefile.coq && \
+ make -f Makefile.coq ssreflect/all_ssreflect.vo && \
+ make -f Makefile.coq install )
echo -en 'travis_fold:end:ssr.install\\r'
}
+
+# this installs just the ssreflect + algebra library of math-comp
+install_ssralg()
+{
+ echo 'Installing ssralg' && echo -en 'travis_fold:start:ssralg.install\\r'
+
+ git_checkout "${mathcomp_CI_BRANCH}" "${mathcomp_CI_GITURL}" "${mathcomp_CI_DIR}"
+
+ ( cd "${mathcomp_CI_DIR}/mathcomp" && \
+ make Makefile.coq && \
+ make -f Makefile.coq algebra/all_algebra.vo && \
+ make -f Makefile.coq install )
+
+ echo -en 'travis_fold:end:ssralg.install\\r'
+
+}
diff --git a/dev/ci/ci-geocoq.sh b/dev/ci/ci-geocoq.sh
index 920272bcf..24cd9c427 100755
--- a/dev/ci/ci-geocoq.sh
+++ b/dev/ci/ci-geocoq.sh
@@ -7,4 +7,6 @@ GeoCoq_CI_DIR="${CI_BUILD_DIR}/GeoCoq"
git_checkout "${GeoCoq_CI_BRANCH}" "${GeoCoq_CI_GITURL}" "${GeoCoq_CI_DIR}"
+install_ssralg
+
( cd "${GeoCoq_CI_DIR}" && ./configure.sh && make )
diff --git a/dev/ci/docker/bionic_coq/Dockerfile b/dev/ci/docker/bionic_coq/Dockerfile
index a1178ee2a..52f851917 100644
--- a/dev/ci/docker/bionic_coq/Dockerfile
+++ b/dev/ci/docker/bionic_coq/Dockerfile
@@ -1,4 +1,4 @@
-# CACHEKEY: "bionic_coq-V2018-05-07-V2"
+# CACHEKEY: "bionic_coq-V2018-06-13-V1"
# ^^ Update when modifying this file.
FROM ubuntu:bionic
@@ -8,7 +8,7 @@ ENV DEBIAN_FRONTEND="noninteractive"
RUN apt-get update -qq && apt-get install -y -qq m4 wget time gcc-multilib opam \
libgtk2.0-dev libgtksourceview2.0-dev \
- texlive-latex-extra texlive-fonts-recommended hevea \
+ texlive-latex-extra texlive-fonts-recommended texlive-science \
python3-sphinx python3-pexpect python3-sphinx-rtd-theme python3-bs4 python3-sphinxcontrib.bibtex python3-pip
RUN pip3 install antlr4-python3-runtime
@@ -19,15 +19,19 @@ ENV NJOBS="2" \
OPAMROOTISOK="true"
# Base opam is the set of base packages required by Coq
-ENV COMPILER="4.02.3" \
- BASE_OPAM="num ocamlfind jbuilder ounit"
+ENV COMPILER="4.02.3"
RUN opam init -a -y -j $NJOBS --compiler="$COMPILER" default https://opam.ocaml.org && eval $(opam config env) && opam update
-# Setup of the base switch; CI_OPAM contains Coq's CI dependencies.
+# Common OPAM packages.
+# `num` does not have a version number as the right version to install varies
+# with the compiler version.
+ENV BASE_OPAM="num ocamlfind.1.8.0 jbuilder.1.0+beta20 ounit.2.0.8" \
+ CI_OPAM="menhir.20180530 elpi.1.0.4 ocamlgraph.1.8.8"
+
+# BASE switch; CI_OPAM contains Coq's CI dependencies.
ENV CAMLP5_VER="6.14" \
- COQIDE_OPAM="lablgtk.2.18.5 conf-gtksourceview.2" \
- CI_OPAM="menhir elpi ocamlgraph"
+ COQIDE_OPAM="lablgtk.2.18.5 conf-gtksourceview.2"
RUN opam switch -y -j $NJOBS "$COMPILER" && eval $(opam config env) && \
opam install -j $NJOBS $BASE_OPAM camlp5.$CAMLP5_VER $COQIDE_OPAM $CI_OPAM
@@ -36,14 +40,15 @@ RUN opam switch -y -j $NJOBS "$COMPILER" && eval $(opam config env) && \
RUN opam switch -y -j $NJOBS "${COMPILER}+32bit" && eval $(opam config env) && \
opam install -j $NJOBS $BASE_OPAM camlp5.$CAMLP5_VER
-# BE switch
-ENV COMPILER_BE="4.06.1" \
- CAMLP5_VER_BE="7.05" \
- COQIDE_OPAM_BE="lablgtk.2.18.6 conf-gtksourceview.2"
+# EDGE switch
+ENV COMPILER_EDGE="4.06.1" \
+ CAMLP5_VER_EDGE="7.05" \
+ COQIDE_OPAM_EDGE="lablgtk.2.18.6 conf-gtksourceview.2"
-RUN opam switch -y -j $NJOBS $COMPILER_BE && eval $(opam config env) && \
- opam install -j $NJOBS $BASE_OPAM camlp5.$CAMLP5_VER_BE $COQIDE_OPAM_BE
+RUN opam switch -y -j $NJOBS $COMPILER_EDGE && eval $(opam config env) && \
+ opam install -j $NJOBS $BASE_OPAM camlp5.$CAMLP5_VER_EDGE $COQIDE_OPAM_EDGE
-# BE+flambda switch
-RUN opam switch -y -j $NJOBS "${COMPILER_BE}+flambda" && eval $(opam config env) && \
- opam install -j $NJOBS $BASE_OPAM camlp5.$CAMLP5_VER_BE $COQIDE_OPAM_BE $CI_OPAM
+# EDGE+flambda switch, we install CI_OPAM so as to be able to use
+# `ci-template-flambda` with everything.
+RUN opam switch -y -j $NJOBS "${COMPILER_EDGE}+flambda" && eval $(opam config env) && \
+ opam install -j $NJOBS $BASE_OPAM camlp5.$CAMLP5_VER_EDGE $COQIDE_OPAM_EDGE $CI_OPAM
diff --git a/dev/ci/user-overlays/00664-herbelin-master+change-for-coq-pr664-compatibility.sh b/dev/ci/user-overlays/00664-herbelin-master+change-for-coq-pr664-compatibility.sh
new file mode 100644
index 000000000..9d96b6d4c
--- /dev/null
+++ b/dev/ci/user-overlays/00664-herbelin-master+change-for-coq-pr664-compatibility.sh
@@ -0,0 +1,4 @@
+ if [ "$CI_PULL_REQUEST" = "664" ] || [ "$CI_BRANCH" = "trunk+fix-5500-too-weak-test-return-clause" ]; then
+ fiat_parsers_CI_BRANCH=master+change-for-coq-pr664-compatibility
+ fiat_parsers_CI_GITURL=https://github.com/herbelin/fiat
+fi
diff --git a/dev/ci/user-overlays/07677-ejgallego-misctypes+bye2.sh b/dev/ci/user-overlays/07677-ejgallego-misctypes+bye2.sh
new file mode 100644
index 000000000..b4f716139
--- /dev/null
+++ b/dev/ci/user-overlays/07677-ejgallego-misctypes+bye2.sh
@@ -0,0 +1,8 @@
+_OVERLAY_BRANCH=misctypes+bye2
+
+if [ "$CI_PULL_REQUEST" = "7677" ] || [ "$CI_BRANCH" = "_OVERLAY_BRANCH" ]; then
+
+ Equations_CI_BRANCH="$_OVERLAY_BRANCH"
+ Equations_CI_GITURL=https://github.com/ejgallego/Coq-Equations
+
+fi
diff --git a/dev/ci/user-overlays/README.md b/dev/ci/user-overlays/README.md
index aec2dfe0a..41212568d 100644
--- a/dev/ci/user-overlays/README.md
+++ b/dev/ci/user-overlays/README.md
@@ -6,7 +6,7 @@ request to test it with the adapted version of the external project.
An overlay is a file which defines where to look for the patched version so that
testing is possible. It redefines some variables from
-[`ci-basic-overlay.sh`](/dev/ci/ci-basic-overlay.sh):
+[`ci-basic-overlay.sh`](../ci-basic-overlay.sh):
give the name of your branch using a `_CI_BRANCH` variable and the location of
your fork using a `_CI_GITURL` variable.
@@ -28,4 +28,4 @@ if [ "$CI_PULL_REQUEST" = "669" ] || [ "$CI_BRANCH" = "ssr-merge" ]; then
fi
```
-(`CI_PULL_REQUEST` and `CI_BRANCH` are set in [`ci-common.sh`](/dev/ci/ci-common.sh))
+(`CI_PULL_REQUEST` and `CI_BRANCH` are set in [`ci-common.sh`](../ci-common.sh))
diff --git a/dev/doc/MERGING.md b/dev/doc/MERGING.md
index 65457b63a..c0cd9c8cd 100644
--- a/dev/doc/MERGING.md
+++ b/dev/doc/MERGING.md
@@ -6,7 +6,7 @@ This document describes how patches, submitted as pull requests (PRs) on the
## Code owners
-The [CODEOWNERS](/.github/CODEOWNERS) file describes, for each part of the
+The [CODEOWNERS](../../.github/CODEOWNERS) file describes, for each part of the
system, two owners. One is the principal maintainer of the component, the other
is the secondary maintainer.
@@ -51,10 +51,10 @@ say in a comment they think a review is not required and proceed with the merge.
If the PR breaks compatibility of some external projects in CI, then fixes to
those external projects should have been prepared (cf. the relevant sub-section
-in the [CI README](/dev/ci/README.md#Breaking-changes) and the PR can be tested
-with these fixes thanks to ["overlays"](/dev/ci/user-overlays/README.md).
+in the [CI README](../ci/README.md#Breaking-changes)) and the PR can be tested
+with these fixes thanks to ["overlays"](../ci/user-overlays/README.md).
-Moreover the PR must absolutely update the [`CHANGES`](/CHANGES) file.
+Moreover the PR must absolutely update the [`CHANGES`](../../CHANGES) file.
If overlays are missing, ask the author to prepare them and label the PR with
the [needs: overlay](https://github.com/coq/coq/labels/needs%3A%20overlay) label.
@@ -74,7 +74,17 @@ Once all reviewers approved the PR, the assignee is expected to check that CI
completed without relevant failures, and that the PR comes with appropriate
documentation and test cases. If not, they should leave a comment on the PR and
put the appropriate label. Otherwise, they are expected to merge the PR using the
-[merge script](/dev/tools/merge-pr.sh).
+[merge script](../tools/merge-pr.sh).
+
+When CI has a few failures which look spurious, restarting the corresponding
+jobs is a good way of confirming that they were indeed spurious.
+To restart a job on Travis, you should connect using your GitHub account;
+being part of the Coq organization on GitHub should give you the permission
+to do so.
+To restart a job on GitLab CI, you should sign into GitLab (this can be done
+using a GitHub account); if you are part of the
+[Coq organization on GitLab](https://gitlab.com/coq), you should see a "Retry"
+button; otherwise, send a request to join the organization.
When the PR has conflicts, the assignee can either:
- ask the author to rebase the branch, fixing the conflicts
diff --git a/dev/doc/primproj.md b/dev/doc/primproj.md
new file mode 100644
index 000000000..ea76aeeab
--- /dev/null
+++ b/dev/doc/primproj.md
@@ -0,0 +1,41 @@
+Primitive Projections
+---------------------
+
+ | Proj of Projection.t * constr
+
+Projections are always applied to a term, which must be of a record
+type (i.e. reducible to an inductive type `I params`). Type-checking,
+reduction and conversion are fast (not as fast as they could be yet)
+because we don't keep parameters around. As you can see, it's
+currently a `constant` that is used here to refer to the projection,
+that will change to an abstract `projection` type in the future.
+Basically a projection constant records which inductive it is a
+projection for, the number of params and the actual position in the
+constructor that must be projected. For compatibility reasons, we also
+define an eta-expanded form (accessible from user syntax `@f`). The
+constant_entry of a projection has both pieces of information. Declaring a
+record (under `Set Primitive Projections`) will generate such
+definitions. The API to declare them is not stable at the moment, but
+the inductive type declaration also knows about the projections, i.e.
+a record inductive type decl contains an array of terms representing
+the projections. This is used to implement eta-conversion for record
+types (with at least one field and having all projections definable).
+The canonical value is `Build_R (p1 x) ... (pn x)`. Unification and
+conversion work up to this eta rule. The records can also be universe
+polymorphic of course, and we don't need to keep track of the universe
+instance for the projections either. Projections are reduced _eagerly_
+everywhere, and introduce a new `Zproj` constructor in the abstract
+machines that obeys both the delta (for the constant opacity) and iota
+laws (for the actual reduction). Refolding works as well (afaict), but
+there is a slight hack there related to universes (not projections).
+
+For the ML programmer, the biggest change is that pattern matches on
+kind_of_term require an additional case, which is usually handled
+exactly like an `App (Const p) arg`.
+
+There are slight hacks related to hints as well, to use the primitive
+projection form of f when one does `Hint Resolve f`. Usually hint
+resolve will typecheck the term, resulting in a partially applied
+projection (disallowed), so we allow it to take
+`constr_or_global_reference` arguments instead and special-case on
+projections. Other tactic extensions might need similar treatment.
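
A minimal sketch of the extra case such a pattern match needs, treating `Proj (p, c)` like an application of the projection's underlying constant. `Constr.fold` and `Projection.constant` are used with what are believed to be their usual signatures; exact names may need adjusting against the sources:

~~~ocaml
(* Hedged sketch: collect the constants occurring in a term, handling the
   extra Proj case like an application of the projection's constant.
   Assumes Constr.fold : ('a -> constr -> 'a) -> 'a -> constr -> 'a and
   Projection.constant : Projection.t -> Constant.t. *)
let rec constants_of acc c =
  match Constr.kind c with
  | Constr.Const (kn, _u) -> kn :: acc
  | Constr.Proj (p, r) ->
      (* treated like App (Const p) [|r|]: record p's constant, recurse on r *)
      constants_of (Projection.constant p :: acc) r
  | _ -> Constr.fold constants_of acc c
~~~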
diff --git a/dev/doc/univpoly.txt b/dev/doc/universes.md
index ca3d520c7..c276603ed 100644
--- a/dev/doc/univpoly.txt
+++ b/dev/doc/universes.md
@@ -1,11 +1,11 @@
-Notes on universe polymorphism and primitive projections, M. Sozeau
-===================================================================
+Notes on universe polymorphism
+------------------------------
-The new implementation of universe polymorphism and primitive
-projections introduces a few changes to the API of Coq. First and
-foremost, the term language changes, as global references now carry a
-universe level substitution:
+The implementation of universe polymorphism introduces a few changes
+to the API of Coq. First and foremost, the term language changes, as
+global references now carry a universe level substitution:
+~~~ocaml
type 'a puniverses = 'a * Univ.Instance.t
type pconstant = constant puniverses
type pinductive = inductive puniverses
@@ -15,30 +15,31 @@ type constr = ...
| Const of pconstant
| Ind of pinductive
| Construct of pconstructor
- | Proj of constant * constr
-
+~~~
Universes
-=========
+---------
- Universe instances (an array of levels) gets substituted when
+Universe instances (an array of levels) get substituted when
unfolding definitions, are used to typecheck and are unified according
-to the rules in the ITP'14 paper on universe polymorphism in Coq.
+to the rules in the ITP'14 paper on universe polymorphism in Coq.
+~~~ocaml
type Level.t = Set | Prop | Level of int * dirpath (* hashconsed *)
type Instance.t = Level.t array
type Universe.t = Level.t list (* hashconsed *)
+~~~
The universe module defines modules and abstract types for levels,
universes etc.. Structures are hashconsed (with a hack to take care
-of the fact that deserialization breaks sharing).
+of the fact that deserialization breaks sharing).
- Definitions (constants, inductives) now carry around not only
+ Definitions (constants, inductives) now carry around not only
constraints but also the universes they introduced (a Univ.UContext.t).
-There is another kind of contexts [Univ.ContextSet.t], the latter has
+There is another kind of contexts `Univ.ContextSet.t`, the latter has
a set of universes, while the former has serialized the levels in an
-array, and is used for polymorphic objects. Both have "reified"
-constraints depending on global and local universes.
+array, and is used for polymorphic objects. Both have "reified"
+constraints depending on global and local universes.
A polymorphic definition is abstract w.r.t. the variables in this
context, while a monomorphic one (or template polymorphic) just adds the
@@ -46,18 +47,18 @@ universes and constraints to the global universe context when it is put
in the environment. No other universes than the global ones and the
declared local ones are needed to check a declaration, hence the kernel
does not produce any constraints anymore, apart from module
-subtyping.... There are hence two conversion functions now: [check_conv]
-and [infer_conv]: the former just checks the definition in the current env
+subtyping.... There are hence two conversion functions now: `check_conv`
+and `infer_conv`: the former just checks the definition in the current env
(in which we usually push_universe_context of the associated context),
-and [infer_conv] which produces constraints that were not implied by the
+and `infer_conv` which produces constraints that were not implied by the
ambient constraints. Ideally, that one could be put out of the kernel,
-but currently module subtyping needs it.
+but currently module subtyping needs it.
Inference of universes is now done during refinement, and the evar_map
carries the incrementally built universe context, starting from the
-global universe constraints (see [Evd.from_env]). [Evd.conversion] is a
-wrapper around [infer_conv] that will do the bookkeeping for you, it
-uses [evar_conv_x]. There is a universe substitution being built
+global universe constraints (see `Evd.from_env`). `Evd.conversion` is a
+wrapper around `infer_conv` that will do the bookkeeping for you; it
+uses `evar_conv_x`. There is a universe substitution being built
incrementally according to the constraints, so one should normalize at
the end of a proof (or during a proof) with that substitution just like
we normalize evars. There are some nf_* functions in
@@ -67,16 +68,16 @@ the universe constraints used in the term. It is heuristic but
validity-preserving. No user-introduced universe (i.e. coming from a
user-written anonymous Type) gets touched by this, only the fresh
universes generated for each global application. Using
-
+~~~ocaml
val pf_constr_of_global : Globnames.global_reference -> (constr -> tactic) -> tactic
-
+~~~
is the way to make a constr out of a global reference in the new API.
If the constr is polymorphic, it will add the necessary constraints to
the evar_map. Even if a constr is not polymorphic, we have to take care
of keeping track of its universes. Typically, using:
-
- mkApp (coq_id_function, [| A; a |])
-
+~~~ocaml
+ mkApp (coq_id_function, [| A; a |])
+~~~
and putting it in a proof term is not enough now. One has to somehow
show that A's type is in cumulativity relation with id's type
argument, incurring a universe constraint. To do this, one can simply
@@ -84,19 +85,19 @@ call Typing.resolve_evars env evdref c which will do some infer_conv to
produce the right constraints and put them in the evar_map. Of course in
some cases you might know from an invariant that no new constraint would
be produced and get rid of it. Anyway the kernel will tell you if you
-forgot some. As a temporary way out, [Universes.constr_of_global] allows
+forgot some. As a temporary way out, `Universes.constr_of_global` allows
you to make a constr from any non-polymorphic constant, but it will fail
on polymorphic ones.
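
As a hedged illustration of the call pattern just described: the name `Typing.resolve_evars` and its calling convention are taken from the text, `coq_id_function`, `ty` and `a` are placeholder constrs, and the function is assumed to return the checked term.

~~~ocaml
(* Sketch only: build the application with mkApp, then let
   Typing.resolve_evars produce the universe constraints that mkApp alone
   does not generate and record them in !evdref. *)
let typed_id_application env evdref coq_id_function ty a =
  let c = Constr.mkApp (coq_id_function, [| ty; a |]) in
  Typing.resolve_evars env evdref c
~~~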
Other than that, unification (w_unify and evarconv) now takes account of universes and
produces only well-typed evar_maps.
-Some syntactic comparisons like the one used in [change] have to be
-adapted to allow identification up-to-universes (when dealing with
-polymorphic references), [make_eq_univs_test] is there to help.
+Some syntactic comparisons like the one used in `change` have to be
+adapted to allow identification up-to-universes (when dealing with
+polymorphic references), `make_eq_univs_test` is there to help.
In constr, there are actually many new comparison functions to deal with
that:
-
+~~~ocaml
(** [equal a b] is true if [a] equals [b] modulo alpha, casts,
and application grouping *)
val equal : constr -> constr -> bool
@@ -105,7 +106,7 @@ val equal : constr -> constr -> bool
application grouping and the universe equalities in [u]. *)
val eq_constr_univs : constr Univ.check_function
-(** [leq_constr_univs u a b] is [true] if [a] is convertible to [b] modulo
+(** [leq_constr_univs u a b] is [true] if [a] is convertible to [b] modulo
alpha, casts, application grouping and the universe inequalities in [u]. *)
val leq_constr_univs : constr Univ.check_function
@@ -120,47 +121,47 @@ val leq_constr_universes : constr -> constr -> bool Univ.universe_constrained
(** [eq_constr_univs a b] [true, c] if [a] equals [b] modulo alpha, casts,
application grouping and ignoring universe instances. *)
val eq_constr_nounivs : constr -> constr -> bool
-
-The [_univs] versions are doing checking of universe constraints
-according to a graph, while the [_universes] are producing (non-atomic)
+~~~
+The `_univs` versions check universe constraints
+according to a graph, while the `_universes` versions produce (non-atomic)
universe constraints. The non-atomic universe constraints include the
-[ULub] constructor: when comparing [f (* u1 u2 *) c] and [f (* u1' u2'
-*) c] we add ULub constraints on [u1, u1'] and [u2, u2']. These are
-treated specially: as unfolding [f] might not result in these
+`ULub` constructor: when comparing `f (* u1 u2 *) c` and `f (* u1' u2'
+*) c` we add ULub constraints on `u1, u1'` and `u2, u2'`. These are
+treated specially: as unfolding `f` might not result in these
unifications, we need to keep track of the fact that failure to satisfy
them does not mean that the terms are actually equal. This is used in
-unification but probably not necessary to the average programmer.
+unification but probably not necessary to the average programmer.
Another issue for ML programmers is that tables of constrs now usually
-need to take a [constr Univ.in_universe_context_set] instead, and
-properly refresh the universes context when using the constr, e.g. using
-Clenv.refresh_undefined_univs clenv or:
-
+need to take a `constr Univ.in_universe_context_set` instead, and
+properly refresh the universes context when using the constr, e.g. using
+Clenv.refresh_undefined_univs clenv or:
+~~~ocaml
(** Get fresh variables for the universe context.
Useful to make tactics that manipulate constrs in universe contexts polymorphic. *)
-val fresh_universe_context_set_instance : universe_context_set ->
+val fresh_universe_context_set_instance : universe_context_set ->
universe_level_subst * universe_context_set
-
-The substitution should be applied to the constr(s) under consideration,
+~~~
+The substitution should be applied to the constr(s) under consideration,
and the context_set merged with the current evar_map with:
-
+~~~ocaml
val merge_context_set : rigid -> evar_map -> Univ.universe_context_set -> evar_map
-
-The [rigid] flag here should be [Evd.univ_flexible] most of the
+~~~
+The `rigid` flag here should be `Evd.univ_flexible` most of the
time. This means the universe levels of polymorphic objects in the
-constr might get instantiated instead of generating equality constraints
+constr might get instantiated instead of generating equality constraints
(Evd.univ_rigid does that).
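
A hedged sketch of this refresh-before-use pattern, using the two functions quoted above plus `Vars.subst_univs_level_constr` (assumed to have its usual `universe_level_subst -> constr -> constr` signature); module paths may differ between versions:

~~~ocaml
(* Sketch: [c] was stored together with its universe context set [ctx].
   Before using it, get fresh levels, rename them inside the term, and
   merge the fresh context into the current evar_map. *)
let use_stored_constr sigma (c, ctx) =
  let subst, ctx' = Universes.fresh_universe_context_set_instance ctx in
  let c' = Vars.subst_univs_level_constr subst c in
  (* univ_flexible lets the fresh levels be instantiated by unification *)
  let sigma = Evd.merge_context_set Evd.univ_flexible sigma ctx' in
  (sigma, c')
~~~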
-On this issue, I recommend forcing commands to take [global_reference]s
+On this issue, I recommend forcing commands to take `global_reference`s
only; the user can declare his specialized terms used as hints as
constants and this is cleaner. Alas, backward-compatibility-wise,
this is the only solution I found. In the case of global_references
-only, it's just a matter of using [Evd.fresh_global] /
-[pf_constr_of_global] to let the system take care of universes.
+only, it's just a matter of using `Evd.fresh_global` /
+`pf_constr_of_global` to let the system take care of universes.
The universe graph
-==================
+------------------
To accommodate universe polymorphic definitions, the graph structure in
kernel/univ.ml was modified. The new API forces every universe to be
@@ -176,68 +177,14 @@ no universe i can be set lower than Set, so the chain of universes
always bottoms down at Prop < Set.
Modules
-=======
+-------
One has to think of universes in modules as being globally declared, so
when including a module (type) which declares a type i (e.g. through a
parameter), we get back a copy of i and not some fresh universe.
-Projections
-===========
-
- | Proj of constant * constr
-
-Projections are always applied to a term, which must be of a record type
-(i.e. reducible to an inductive type [I params]). Type-checking,
-reduction and conversion are fast (not as fast as they could be yet)
-because we don't keep parameters around. As you can see, it's currently
-a [constant] that is used here to refer to the projection, that will
-change to an abstract [projection] type in the future. Basically a
-projection constant records which inductive it is a projection for, the
-number of params and the actual position in the constructor that must be
-projected. For compatibility reason, we also define an eta-expanded form
-(accessible from user syntax @f). The constant_entry of a projection has
-both informations. Declaring a record (under [Set Primitive
-Projections]) will generate such definitions. The API to declare them is
-not stable at the moment, but the inductive type declaration also knows
-about the projections, i.e. a record inductive type decl contains an
-array of terms representing the projections. This is used to implement
-eta-conversion for record types (with at least one field and having all
-projections definable). The canonical value being [Build_R (pn x)
-... (pn x)]. Unification and conversion work up to this eta rule. The
-records can also be universe polymorphic of course, and we don't need to
-keep track of the universe instance for the projections either.
-Projections are reduced _eagerly_ everywhere, and introduce a new Zproj
-constructor in the abstract machines that obeys both the delta (for the
-constant opacity) and iota laws (for the actual reduction). Refolding
-works as well (afaict), but there is a slight hack there related to
-universes (not projections).
-
-For the ML programmer, the biggest change is that pattern-matchings on
-kind_of_term require an additional case, that is handled usually exactly
-like an [App (Const p) arg].
-
-There are slight hacks related to hints is well, to use the primitive
-projection form of f when one does [Hint Resolve f]. Usually hint
-resolve will typecheck the term, resulting in a partially applied
-projection (disallowed), so we allow it to take
-[constr_or_global_reference] arguments instead and special-case on
-projections. Other tactic extensions might need similar treatment.
-
-WIP
-===
-
-- [vm_compute] does not deal with universes and projections correctly,
-except when it goes to a normal form with no projections or polymorphic
-constants left (the most common case). E.g. Ring with Set Universe
-Polymorphism and Set Primitive Projections work (at least it did at some
-point, I didn't recheck yet).
-
-- [native_compute] works with universes and projections.
-
-
Incompatibilities
-=================
+-----------------
Old-style universe polymorphic definitions were implemented by taking
advantage of the fact that elaboration (i.e., pretyping and unification)
@@ -247,33 +194,33 @@ possible, as unification ensures that the substitution is built is
entirely well-typed, even w.r.t universes. This means that some terms
that type-checked before no longer do, especially projections of the
pair:
-
+~~~coq
@fst ?x ?y : prod ?x ?y : Type (max(Datatypes.i, Datatypes.j)).
-
+~~~
The "template universe polymorphic" variables i and j appear during
typing without being refreshed, meaning that they can be lowered (have
upper constraints) with user-introduced universes. In most cases this
won't work, so ?x and ?y have to be instantiated earlier, either from
the type of the actual projected pair term (some t : prod A B) or the
-typing constraint. Adding the correct type annotations will always fix
+typing constraint. Adding the correct type annotations will always fix
this.
Unification semantics
-=====================
+---------------------
In Ltac, matching with:
-- a universe polymorphic constant [c] matches any instance of the
+- a universe polymorphic constant `c` matches any instance of the
constant.
-- a variable ?x already bound to a term [t] (non-linear pattern) uses
+- a variable ?x already bound to a term `t` (non-linear pattern) uses
strict equality of universes (e.g., Type@{i} and Type@{j} are not
equal).
In tactics:
-- [change foo with bar], [pattern foo] will unify all instances of [foo]
- (and convert them with [bar]). This might incur unifications of
- universes. [change] uses conversion while [pattern] only does
+- `change foo with bar`, `pattern foo` will unify all instances of `foo`
+ (and convert them with `bar`). This might incur unifications of
+ universes. `change` uses conversion while `pattern` only does
syntactic matching up-to unification of universes.
-- [apply], [refine] use unification up to universes.
+- `apply`, `refine` use unification up to universes.
diff --git a/dev/doc/universes.txt b/dev/doc/universes.txt
deleted file mode 100644
index a40706e99..000000000
--- a/dev/doc/universes.txt
+++ /dev/null
@@ -1,26 +0,0 @@
-How to debug universes?
-
-1. There is a command Print Universes in Coq toplevel
-
- Print Universes.
- prints the graph of universes in the form of constraints
-
- Print Universes "file".
- produces the "file" containing universe constraints in the form
- univ1 # univ2 ;
- where # can be either > >= or =
-
- If "file" ends with .gv or .dot, the resulting file will be in
- dot format.
-
-
- *) for dot see http://www.research.att.com/sw/tools/graphviz/
-
-
-2. There is a printing option
-
- {Set,Unset} Printing Universes.
-
- which, when set, makes all pretty-printed Type's annotated with the
- name of the universe.
-
diff --git a/dev/vm_printers.ml b/dev/vm_printers.ml
index 16917586f..7589e5348 100644
--- a/dev/vm_printers.ml
+++ b/dev/vm_printers.ml
@@ -15,7 +15,10 @@ let ppripos (ri,pos) =
| Reloc_const _ ->
print_string "structured constant\n"
| Reloc_getglobal kn ->
- print_string ("getglob "^(Constant.to_string kn)^"\n"));
+ print_string ("getglob "^(Constant.to_string kn)^"\n")
+ | Reloc_proj_name p ->
+ print_string ("proj "^(Constant.to_string p)^"\n")
+ );
print_flush ()
let print_vfix () = print_string "vfix"