| Commit message (Collapse) | Author | Age |
|\ |
|
| |
| |
| |
| | |
repository. Also removing FAQ-related build rules.
|
| | |
When statically linking plugins, the "DECLARE PLUGIN" macro takes care
of properly setting up the loaded module table.
This setup was also done by `coqmktop`, thus in order to ease
bisecting, we didn't take care of it in the `coqmktop` deprecation.
Fixes #6364.
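For reference, the registration is a one-liner in the plugin source; a minimal sketch, with a hypothetical plugin name ("foo_plugin" is not a real Coq plugin):

```ocaml
(* In the plugin's .ml4 source file.  The DECLARE PLUGIN macro
   registers "foo_plugin" in the loaded-module table, so a statically
   linked coqtop knows the plugin is already present and will not try
   to dynlink foo_plugin.cmxs when a Declare ML Module is processed. *)
DECLARE PLUGIN "foo_plugin"
```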
|
| | |
We remove coqmktop in favor of a couple of simple makefile rules using
ocamlfind. In order to do that, we introduce a new top-level file that
calls the coqtop main entry.
This is very convenient for using other build systems such as
`ocamlbuild` or `jbuilder`.
An additional consideration is that we must perform a side effect on
init depending on whether an OCaml toplevel is available [byte]
or not. We do that by using two different object files, one for the
bytecode version and the other for the native one, but we may want to
review our choice.
We also perform some smaller cleanups, taking advantage of ocamlfind.
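The new top-level file can be a one-line program that delegates to the existing entry point; a minimal sketch (the file name and module path here are assumptions, not necessarily the ones used in the tree):

```ocaml
(* coqtop_bin.ml -- hypothetical name.  With coqmktop gone, the
   executable is an ordinary OCaml program that ocamlfind can link:
   it simply hands control to the coqtop main entry in the library. *)
let () = Coqtop.start ()
```

Such a file can then be linked by a plain `ocamlfind ocamlopt -linkpkg` rule; the exact findlib package name is likewise an assumption here.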
|
|/ |
|
| |
- timing needs time and python
- the check for compiled files without source looks in the install
  directory (except for make -f Makefile.ci, which doesn't check);
  the install directory has therefore been renamed to _install_ci and
  isn't searched.
|
| |
This is crucial for the search for binary files without known
sources (commit 7d1fc1), since coq_makefile currently doesn't
install the ML sources of plugins in user-contrib.
This could also help with variables such as $(EXISTINGML).
|
|
|
|
|
|
| |
This is a follow-up of 7d1fc15. Without this fix, you're warned about leftover files,
but even a 'make clean' is then refused, so you cannot get rid of them easily
(apart from a git clean -xfd).
|
|\ |
|
| | |
This commit adds timing scripts from
https://github.com/JasonGross/coq-scripts/tree/master/timing into the
tools folder, and integrates them into coq_makefile and Coq's makefile.
The main added makefile targets are:
- the `TIMING` variable - when non-empty, this creates for each built
`.v` file a `.v.timing` file (or `.v.before-timing` or
`.v.after-timing` for `TIMING=before` and `TIMING=after`, respectively)
- `pretty-timed TGTS=...` - runs `make $(TGTS)` and prints a table of
sorted timings at the end, saving it to `time-of-build-pretty.log`
- `make-pretty-timed-before TGTS=...`, `make-pretty-timed-after
TGTS=...` - runs `make $(TGTS)`, and saves the timing data to the file
`time-of-build-before.log` or `time-of-build-after.log`, respectively
- `print-pretty-timed-diff` - prints a table with the difference between
the logs recorded by `make-pretty-timed-before` and
`make-pretty-timed-after`, saving the table to
`time-of-build-both.log`
- `print-pretty-single-time-diff BEFORE=... AFTER=...` - this prints a
table with the differences between two `.v.timing` files, and saves
the output to `time-of-build-pretty.log`
- `*.v.timing.diff` - this saves the result of
`print-pretty-single-time-diff` for each target to the
`.v.timing.diff` file
- `all.timing.diff` (`world.timing.diff` and `coq.timing.diff` in Coq's
own Makefile) - makes all `*.v.timing.diff` targets
N.B. We need to make `make pretty-timed` fail if `make` fails. To do
this, we need to get around the fact that pipes swallow exit codes.
There are a few solutions in
https://stackoverflow.com/questions/23079651/equivalent-of-pipefail-in-gnu-make;
we choose the temporary file rather than requiring the shell of the
makefile to be bash.
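The temporary-file workaround can be sketched in plain POSIX shell (the file names and the failing command here are illustrative, not the ones used by the makefile):

```shell
# Preserve the exit status of a command whose output goes through a
# pipe: write the status to a temp file inside the subshell, then read
# it back once the pipe has drained.  `exit 3` stands in for a failing
# `make`, and `cat` for the timing post-processor.
status_file=$(mktemp)
( sh -c 'echo building; exit 3'; echo $? > "$status_file" ) | cat
code=$(cat "$status_file")
rm -f "$status_file"
echo "preserved exit code: $code"
```

With bash one could instead use `set -o pipefail`, but the temp file keeps the rule portable to any /bin/sh.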
|
|/
|
| |
This should help prevent weird compilation failures due to leftover
object files after deleting or moving some source files.
By the way:
- use plain $(filter-out ...) instead of a 'diff' macro (thanks Jason
for the suggestion)
- rename FIND_VCS_CLAUSE to FIND_SKIP_DIRS since it contains more
than version control stuff nowadays
|
|
|
|
|
|
|
| |
of this file
There is now a warning if the content of micromega.ml isn't what MExtraction.v would
produce.
|
|
|
|
|
|
| |
See now https://github.com/coq/bignums
Int31 is still in the stdlib.
Some proofs there have been adapted to avoid the need for BigNumPrelude.
|
|
| |
On a machine where ocamlopt is available, 'make world' will now
perform bytecode compilation only in grammar/ (up to the syntax
extension grammar.cma), and then exclusively use ocamlopt.
In particular, 'make world' does not build bin/coqtop.byte.
A separate rule 'make byte' does it, as well as bytecode plugins and
things like dev/printers.cma.
'make install' deals only with the part built by 'make', while a new
rule 'make install-byte' installs the part built by 'make byte'.
IMPORTANT: PLEASE AVOID doing things like 'make -j world byte' or any
parallel mix of native and byte rules. These are known to crash sometimes,
see below. Instead, run 'make -j && make -j byte'.
Apart from a marginal compilation speed-up for users not interested
in byte versions, the main reason for this commit is to discourage any
simultaneous use of the OCaml native and byte compilers. Indeed, ocamlopt and
ocamlc will both happily destroy and recreate the .cmi for .ml files with no .mli,
and in a parallel build this may happen at the very moment another
ocaml(c|opt) is accessing this .cmi. Until now, this issue has been
handled via nasty hacks (see the former MLWITHOUTMLI and HACKMLI vars in
Makefile.build). But these hacks weren't obvious to extend to ocamlc
-pack vs. ocamlopt -pack.
coqdep_boot takes a "-dyndep" option to control precisely how a Declare ML
Module influences the .v.d dependency file. Possible values are:
-dyndep opt : regular situation now, depends only on .cmxs
-dyndep byte : no ocamlopt, or compilation forced to bytecode, depends on .cm(o|a)
-dyndep both : earlier behavior, dependency over both .cm(o|a) and .cmxs
-dyndep none : interesting for coqtop with statically linked plugins
-dyndep var : place Makefile variables $(DYNLIB) and $(DYNOBJ) in .v.d
instead of extensions .cm*, so that the choice is made in the rest of the
makefile (see a future commit about coq_makefile)
NB: two extra mli were added to avoid building unnecessary .cmo during 'make world',
without having to use the ocamldep -native option.
NB: we should state somewhere that coqmktop -top won't work unless
'make byte' was done first.
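As an illustration, for a hypothetical `Declare ML Module "foo_plugin".` in bar.v, the generated bar.v.d entry would depend on different files per mode (a sketch, not verbatim coqdep output):

```make
# -dyndep opt (the regular situation now)
bar.vo : bar.v foo_plugin.cmxs
# -dyndep byte
bar.vo : bar.v foo_plugin.cma
# -dyndep var : the choice is deferred to the including makefile
bar.vo : bar.v foo_plugin$(DYNLIB)
```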
|
|
| |
- Improve the setup to support external contribs.
We use a more minimalistic Coq build, gaining a few extra minutes.
- [math-comp] work around a `make -j` bug to enable parallel building.
|
|\ |
|
| | |
Some C files included in build scripts (in dev/build) were triggering
errors or warnings on non-win32 platforms.
Note that ide/ide_win32_stubs.c was already handled through an ad-hoc
rule in Makefile.
If you add a new C file outside of kernel/byterun, please extend the CFILES
variable.
|
|/
|
| |
This file was only used during ocamldebug sessions (in the dev/db
script). It contained a large subset of the core cma files,
up to the printing functions. There were a few notable exceptions,
for instance no kernel/vm.cmo, to avoid loading dllcoqrun.so in ocamldebug.
But printers.cma was troublesome to maintain: almost every time an ML file
was added/removed/renamed in the core of Coq, dev/printers.mllib
had to be edited, in addition to the directory-specific .mllib
(kernel/kernel.mllib and co). So I propose here to kill this file,
and instead put in dev/db several "load_printer" of the core cma files.
For that to work, we need to compile kernel/kernel.cma with the right
-dllib and -dllpath options, but that shouldn't hurt (on the contrary).
We also now source the camlpX cma in dev/db, via a new generated file
dev/camlp4.dbg containing a load_printer of either gramlib.cma or
camlp4lib.cma.
If one doesn't want to perform the whole "source db" at the start
of an ocamldebug session, then the former "load_printer printers.cma"
could be replaced by:
source core.dbg
load_printer top_printers.cmo
See for instance the minimal dev/base_db.
|
|
|
|
|
|
|
| |
- With the ?= construction, we avoid warnings about undefined variables,
while tolerating both 'make VERBOSE=1' and 'VERBOSE=1 make'
- Some extra documentation and cleanup
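The `?=` form assigns only when the variable is not already set, whether it comes from the command line or from the environment; a minimal sketch of such a default:

```make
# Both `make VERBOSE=1` and `VERBOSE=1 make` override this default,
# and --warn-undefined-variables stays quiet because the name is
# always defined.
VERBOSE ?=
```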
|
|
| |
This reverts commit b2f8f9edd5c1bb0a9c8c4f4b049381b979d3e385, reversing
changes made to da99355b4d6de31aec5a660f7afe100190a8e683.
Hugo asked for more discussion on this topic, and it was not in the roadmap. I
merged it prematurely because I thought there was a consensus. Also, I missed
that it was changing coq_makefile. Sorry about that.
|
| |
On a machine where ocamlopt is available, 'make world' will now
perform bytecode compilation only in grammar/ (up to the syntax
extension grammar.cma), and then exclusively use ocamlopt.
In particular, 'make world' does not build bin/coqtop.byte.
A separate rule 'make byte' does it, as well as bytecode plugins and
things like dev/printers.cma.
'make install' deals only with the part built by 'make', while a new
rule 'make install-byte' installs the part built by 'make byte'.
IMPORTANT: PLEASE AVOID doing things like 'make -j world byte' or any
parallel mix of native and byte rules. These are known to crash sometimes,
see below. Instead, run 'make -j && make -j byte'.
Apart from a marginal compilation speed-up for users not interested
in byte versions, the main reason for this commit is to discourage any
simultaneous use of the OCaml native and byte compilers. Indeed, ocamlopt and
ocamlc will both happily destroy and recreate the .cmi for .ml files with no .mli,
and in a parallel build this may happen at the very moment another
ocaml(c|opt) is accessing this .cmi. Until now, this issue has been
handled via nasty hacks (see the former MLWITHOUTMLI and HACKMLI vars in
Makefile.build). But these hacks weren't obvious to extend to ocamlc
-pack vs. ocamlopt -pack.
coqdep_boot takes a "-dyndep" option to control precisely how a Declare ML
Module influences the .v.d dependency file. Possible values are:
-dyndep opt : regular situation now, depends only on .cmxs
-dyndep byte : no ocamlopt, or compilation forced to bytecode, depends on .cm(o|a)
-dyndep both : earlier behavior, dependency over both .cm(o|a) and .cmxs
-dyndep none : interesting for coqtop with statically linked plugins
-dyndep var : place Makefile variables $(DYNLIB) and $(DYNOBJ) in .v.d
instead of extensions .cm*, so that the choice is made in the rest of the
makefile (see next commit about coq_makefile)
NB: two extra mli were added to avoid building unnecessary .cmo during 'make world',
without having to use the ocamldep -native option.
NB: we should state somewhere that coqmktop -top won't work unless
'make byte' was done first.
|
| |
For now, the pack name reuses the previous .cma name of the plugin
(extraction_plugin, etc).
The earlier .mllib files in plugins are now named .mlpack.
They are also handled by bin/ocamllibdep, just as .mllib.
We've slightly modified ocamllibdep to help set the -for-pack
options: in *.mlpack.d files, there are some extra variables such as
foo/bar_FORPACK := -for-pack Baz
when foo/bar.ml is mentioned in baz.mlpack.
When a plugin calls a function from another plugin, the name
needs to be qualified (Foo_plugin.Bar.baz instead of Bar.baz).
Btw, we discard the generated files plugins/*/*_mod.ml, they are
obsolete now, replaced by DECLARE PLUGIN.
Nota: there's a potential problem in the micromega directory,
some .ml files are linked both in micromega_plugin and in csdpcert.
And we now compile these files with a -for-pack, even if they are
not packed in the case of csdpcert. In practice, csdpcert seems
to work well, but we should verify with OCaml experts.
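The generated *.mlpack.d variables then feed the compilation rules; a hypothetical excerpt (the path is illustrative):

```make
# From extraction.mlpack.d: every module packed into Extraction_plugin
# must itself be compiled with the matching -for-pack flag, so that
# ocamlopt can later produce the packed extraction_plugin.cmx.
plugins/extraction/extract_env_FORPACK := -for-pack Extraction_plugin
```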
|
| |
General idea : Makefile.build was far too big to be easy to grasp or
maintain, with information scattered everywhere. Let's try to tidy that!
Normally, this commit is transparent for the user. We simply regroup
some parts of Makefile.build in several new dedicated files:
- Makefile.ide
- Makefile.checker
- Makefile.dev (for printers, revision, extra partial targets, otags)
- Makefile.install
These new files are "included" at the start of Makefile.build, to provide
the same behavior as before, but with a Makefile.build shrunk by 50%
(to approx. 600 lines). Makefile.build now handles in priority the build
of coqtop, minor tools, theories and plugins.
Note: this is *not* a separate build system for coqchk nor coqide,
even if this can be seen as a first step in this direction (won't be easy
anyway to continue, due to the sharing of various stuff in lib and more).
In particular Makefile.{coqchk,ide} may rely here and there on some generic
rules left in Makefile.build. Conversely, be sure to prefix rules in
Makefile.{coqchk,ide} by checker/... or ide/... in order to avoid
interference with generic rules.
Makefile.common is still there, but quite simplified. For instance,
some variables that were used only once (e.g. lists of cmo files to link
in the various tools) are now defined in Makefile.build, directly
where they're needed. THEORIESVO and PLUGINSVO are made directly out of
the theories/*/vo.itarget and plugins/*/vo.itarget files; no more long
manual lists of subdirs. Specific sub-targets such as 'reals' still
exist, but in Makefile.dev, and they aren't mandatory.
Makefile.doc is augmented by the rules building the documentation of
the sources via ocamldoc.
This classification attempt could probably be improved. For instance,
the install rules for coqide are currently in Makefile.ide, but could
also go in Makefile.install. Note that I've removed install-library-light
which was broken anyway (arith isn't self-contained anymore).
|
|
|
|
|
|
| |
More precisely, we first remove *.native, *.cm*, *.o, which should
normally constitute the only content of these .coq-native directories,
and then remove these directories if they're indeed empty.
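The two-step cleanup can be sketched in shell (the demo directory and file names are fabricated for illustration):

```shell
# Create a fake .coq-native directory, remove the compiled artifacts,
# then prune the directory only if nothing else remains in it.
mkdir -p demo/.coq-native
touch demo/.coq-native/Nfoo.cmx demo/.coq-native/Nfoo.o demo/.coq-native/foo.native
find demo -type d -name '.coq-native' | while read -r d; do
  rm -f "$d"/*.native "$d"/*.cm* "$d"/*.o
  rmdir "$d" 2>/dev/null || true   # only succeeds when empty
done
[ -d demo/.coq-native ] || echo "pruned"
rm -rf demo
```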
|
| |
We're back to a unique build phase (as before e372b72), but without
relying on the awkward include-deps-failed-lets-retry feature of make.
Since PMP has made grammar/ self-contained, we can now build
grammar.cma in a rather straightforward way, no need for
a specific sub-call to $(MAKE) for that. The dependencies between
files of grammar/ are stated explicitly, since .d files aren't
fully available initially.
Some Makefile simplifications, for instance removing the CAMLP4DEPS
shell horror. Instead, we generalize the use of two different
filename extensions:
- a .mlp does not need grammar.cma (they are in grammar/ and tools/compat5*.mlp)
- a .ml4 is now always preprocessed with grammar.cma (and q_constr.cmo),
except coqide_main.ml4 and its specific rule
Note that we no longer generate .ml4.d (thanks to the .mlp vs.
.ml4 dichotomy)
|
|\ |
|
| | |
|
|/
|
|
|
| |
Nothing is done for camlp4.
There is an issue with computing camlbindir.
|
| |
|
| |
|
|
|
|
| |
The created bundle contains only coqide and gtk (no coqtop, no stdlib)
|
| |
Add [Polymorphic] and [Monomorphic] local flag for definitions as well as
[Set Universe Polymorphism] global flag to make all following definitions
polymorphic. Mainly syntax for now.
First part of the big changes to the kernel:
- Const, Ind, Construct now come with a universe level instance
- It is used for type inference in the kernel, which now also takes
a graph as input: actually a set of local universe variables and their
constraints. Type inference just checks that the constraints are enough
to satisfy its own rules.
- Remove polymorphic_arity and _knowing_parameters everywhere: we
don't need full applications for polymorphism to apply anymore, as
we generate fresh variables at each constant/inductive/constructor
application. However knowing_parameters variants might be reinstated
later for optimization.
- New structures exported in univ.mli:
- universe_list for universe level instances
- universe_context(_set) for the local universe constraints, also
recording which variables will be local and hence generalized after
inference if defining a polymorphic ind/constant.
- this patch makes coq stop compiling at indtypes.ml
Adapt kernel, library, pretyping, tactics and toplevel to universe polymorphism.
Various degrees of integration, places where I was not sure what to do or
just postponed bigger reorganizations of the code are marked with FIXMEs.
Main changes:
- Kernel now checks constraints and does not infer them anymore.
- The inference functions produce a context of constraints that were checked
during inference, useful to do double-checking of the univ. poly. code
but might be removed later.
- Constant, Inductive entries now have a universe context (local variables
and constraints) associated to them.
- Printing, debugging functions for the new structures are also implemented.
- Now stopping at Logic.v
- Lots of new code in kernel/univ.ml that should be reviewed.
- kernel/indtypes probably does not do what's right when inferring inductive
type constraints.
- Adapted evd to use the new universe context structure.
- Did not deal with unification/evar_conv.
- Add externalisation code for universe level instances.
- Support for polymorphism in pretyping/command and proofs/proofview etc.
Needed wrapping of [fresh_.._instance] through the evar_map, which
contains the local state of universes during type-checking.
- Correct the inductive scheme generation to support polymorphism as well.
- Have to review kernel code for correctness, and especially rework the
computation of universe constraints for inductives.
Stops somewhat later in Logic.v
- Fix naming of local/toplevel universes to be correctly done at typechecking time:
local variables have no dirpath.
- Add code to do substitution of universes in modules, not finished yet.
- Move fresh_* functions out of kernel, it won't ever build a universe level again!
- Adapt a lot of new_Type to use the correct dirpath and declare the new types in the evar_map
so we keep track of them.
- A bit of code factorization (evd_comb moved, pretype_global).
- Refactor more code
- Adapt plugins code (sometimes wrong, marked with FIXME)
- Fix cases generating unneeded universe (not sure it's ok though)
- Fix scheme generation for good, might have opportunity to cleanup
the terms later.
Init compiles now (which means rewrite, inversion, elim etc.. work as well).
- Unsolved issue of pretyping to lower sorts properly (to Prop for example).
This has to do with the (Retyping.get_type_of) giving algebraic universes that
would appear on the right of constraints.
This makes checking for dangling universes at the end of pretyping fail,
hence the check in kernel/univ was removed. It should come back when we have
a fix for this.
- Correctly (?) compute the levels of inductive types.
Removed old code pertaining to universe polymorphism. Note that we generate
constraint variables for the conclusion of inductive types invariably.
- Shrink constraints before going to the kernel, combine substitution of the
smaller universe set with normalization of evars (maybe not done everywhere,
only ordinary inductives, definitions and proofs)
- More API reworks overall. tclPUSHCONTEXT can be used to add fresh universes
to the proof goal (used in a few places to get the right instance).
- Quick fix for auto that won't work in the long run. It should always have been
restricted to take constant references as input, without any loss of generality
over constrs.
Fix some plugins and insertion of non-polymorphic constants in a module.
Now stops in relation classes.
Cleanup and move code from kernel to library and from pretyping to library too.
Now there is a unique universe counter declared in library/universes.ml along
with all the functions to generate new universes and get fresh constant/inductive
terms.
- Various function renamings
- One important change in kernel/univ.ml: now [sup] can be applied to Prop.
- Adapt records/classes to universe polymorphism
- Now stops in EqDepFacts due to imprecise universe polymorphism.
Forgot to git add those files.
interp_constr returns the universe context
The context is then pushed through the environment (or proof goal
sigma).
- Fix insertion of constants/inductives in env, pushing constraints to
the global env for non-polymorphic ones.
- Add Prop as a universe level to do proper type inference with sorts.
It is allowed to take [sup] of [Prop] now.
- New nf_evar based on new Evd.map(_undefined)
- In proofs/logic.ml: conv_leq_goal might create some constraints that
are now recorded.
- Adapt Program code to universes.
Merge with latest trunk + fixes
- Use new constr_of_global from universes
- fix eqschemes to use polymorphic universes
- begin fixing cctac but f_equal still fails
- fix [simpl] and rest of tacred
- all the eq_constr with mkConst foo should be fixed as well, only
partially done
- Fix term hashing function to recognize equal terms up to universe instances.
- Fix congruence closure to equate terms that differ only in universe instances,
these will be resolved by constraints.
Add a set of undefined universe variables to unification.
Universe variables can now be declared rigid or flexible (unifiable).
Flexible variables are resolved at the end of typechecking by instantiating
them to their glb, adding upper bound constraints associated to them.
Also:
- Add polymorphic flag for inductives.
- Fix cooking partially
- Fix kernel/univ.ml to do normalization of universe expressions at
the end of substitution.
Correct classes/structures universe inference
- Required a bit of extension in Univ to handle Max properly (sup u
(u+1)) was returning (max(u,u+1)) for example.
- Try a version where substitution of universe expressions for universe
levels is allowed at the end of unification. By an invariant this
should only instantiate with max() types that are morally "on the
right" only.
This is controlled using a rigidity attribute of universe variables,
also allowing to properly do unification w.r.t. universes during
typechecking/inference.
- Currently fails in Vectors/Fin.v because case compilation generates
"flexible" universes that actually appear in the term...
Fix unification of universe variables.
- Fix choice of canonical universe in presence of universe constraints,
and do so by relying on a trichotomy for universe variables: rigid
(won't be substituted), flexible (might be if not substituted by an
algebraic) and flexible_alg (always substituted).
- Fix romega code and a few more plugins, most of the standard library
goes through now.
- Had to define some inductives as Polymorphic explicitly to make
proofs go through, more to come, and definitions should be polymorphic
too, otherwise inconsistencies appear quickly (two uses of the same
polymorphic ind through monomorphic functions (like nth on lists of
Props and nats) will fix the monomorphic function's universe with eq
constraints that are incompatible).
- Correct universe polymorphism handling for fixpoint/cofixpoint
definitions.
- Fix romega to use the right universes for list constructors.
- Fix internalization/externalization to deal properly with the
implicit parsing of params.
- Fix fourier tactic w.r.t. GRefs
- Fix substitution saturation of universes.
- Fix number syntax plugin.
- Fix setoid_ring to take its coefficients in a Set rather
than a Type, avoiding a large number of useless universe constraints.
- Fix minor checker decl
- Fix btauto w.r.t. GRef
- Fix proofview to normalize universes in the original types as well.
- Fix definitions of projections to not take two universes at the same level,
but at different levels instead, avoiding unnecessary constraints that could
lower the level of one component depending on the use of the other component.
Fix simpl fst, snd to use @fst @snd as they have maximal implicits now.
- More simpl snd, fst fixes.
- Try to make the nth theory of lists polymorphic.
Check with Enrico if this change is ok. Case appearing in RingMicromega's call
to congruence l417, through a call to refine -> the_conv_x_leq.
Compile everything.
- "Fix" checker by deactivating code related to polymorphism, should
be updated.
- Make most of List.v polymorphic to help with following definitions.
- When starting a lemma, normalize w.r.t. universes, so that the types
get a fixed universe, not refinable later.
- In record, don't assign a fully flexible universe variable to the record
type if it is a definitional typeclass, as translate_constant doesn't expect
an algebraic universe in the type of a constant. It certainly should though.
- Fix micromega code.
Fix after rebase.
Update printing functions to print the polymorphic status of definitions
and their universe context.
Refine printing of universe contexts
- Fix printer for universe constraints
- Rework normalization of constraints to separate the Union-Find result
from computation of lubs/glbs.
Keep universe contexts of inductives/constants in entries for correct
substitution inside modules. Abstract interface to get an instantiation
of an inductive with its universe substitution in the kernel (no
substitution if the inductive is not polymorphic, even if mind_universes
is non-empty).
Make fst and snd polymorphic, fix instances in RelationPairs to use
different universes for the two elements of a pair.
- Fix bug in nf_constraints: was removing Set <= constraints, but should
remove Prop <= constraints only.
- Make proj1_sig, projT1... polymorphic to avoid weird universe unifications,
giving rise to universe inconsistencies.
Adapt auto hints to polymorphic references.
Really produce polymorphic hints... second try
- Remove algebraic universes that can't appear in the goal when taking the
type of a lemma to start.
Proper handling of universe contexts in clenv and auto so that
polymorphic hints are really refreshed at each application.
Fix erroneous shadowing of sigma variable.
- Make apparent the universe context used in pretyping, including information
about flexibility of universe variables.
- Fix induction to generate a fresh constant instance with flexible universe variables.
Add function to do conversion w.r.t. an evar map and its local universes.
- Fix define_evar_as_sort to not forget constraints coming from the refinement.
- Do not nf_constraints while we don't have the whole term at hand to substitute in.
- Move substitution of full universes to Universes
- Normalize universes inside an evar_map when doing nf_evar_map_universes.
- Normalize universes at each call to interp_ltac (potentially expensive)
Do not normalize all evars at each call to interp_gen in tactics: rather
incrementally normalize the terms at hand, supposing the normalization of universes
will concern only those appearing in it (dangerous but much more efficient).
Do not needlessly generate new universes constraints for projections of records.
Correct polymorphic discharge of section variables.
Fix autorewrite w.r.t. universes: polymorphic rewrite hints get fresh universe
instances at each application.
Fix r2l rewrite scheme to support universe polymorphism
Fix a bug in l2r_forward scheme and fix congruence scheme to handle polymorphism correctly.
Second try at fixing autorewrite, cannot do without pushing the constraints and the set of fresh
universe variables into the proof context.
- tclPUSHCONTEXT allow to set the ctx universe variables as flexible or rigid
- Fix bug in elimschemes, not taking the right sigma
Wrong sigma used in leibniz_rewrite
Avoid recomputation of bounds for equal universes in normalization of constraints,
only the canonical one need to be computed.
Make coercions work with universe polymorphic projections.
Fix an erroneous bound in universe constraint solving.
Make kernel reduction and term comparison strictly aware of universe instances,
with variants for relaxed comparison that output constraints.
Otherwise some constraints that should appear during pretyping don't and we generate
unnecessary constraints/universe variables.
Have to adapt a few tactics to this new behavior by making them universe aware.
- Fix elimschemes to minimize universe variables
- Fix coercions to not forget the universe constraints generated by an application
- Change universe substitutions to maps instead of assoc lists.
- Fix absurd tactic to handle univs properly
- Make length and app polymorphic in List, unification sets their levels otherwise.
Move to modules for namespace management instead of long names in universe code.
More putting things into modules.
Change evar_map structure to support an incremental substitution of universes
(populated from Eq constraints), allowing safe and fast inference of precise levels,
without computing lubs.
- Add many printers and reorganize code
- Extend nf_evar to normalize universe variables according to the substitution.
- Fix ChoiceFacts.v in Logic, no universe inconsistencies anymore. But Diaconescu
still has one (something fixes a universe to Set).
- Adapt omega, functional induction to the changes.
Fix congruence, eq_constr implem, discharge of polymorphic inductives.
Fix merge in auto.
The [-parameters-matter] option (formerly relevant_equality).
Add -parameters-matter to coqc
Do compute the param levels at elaboration time if parameters_matter.
- Fix generalize tactic
- add ppuniverse_subst
- Start fixing normalize_universe_context w.r.t. normalize_univ_variables.
- Fix HUGE bug in Ltac interpretation not folding the sigma correctly if interpreting a tactic application
to multiple arguments.
- Fix bug in union of universe substitution.
- rename parameters-matter to indices-matter
- Fix computation of levels from indices not parameters.
- Fixing parsing so that [Polymorphic] can be applied to gallina extensions.
- When elaborating definitions, make the universes from the type rigid when
checking the term: they should stay abstracted.
- Fix typeclasses eauto's handling of universes for exact hints.
Rework all the code for inferring the levels of inductives and checking their
allowed elimination sorts.
This is based on the computation of a natural level for an inductive type I.
The natural level [nat] of [I : args -> sort := c1 : A1 -> I t1 .. cn : An -> I tn] is
computed by taking the max of the levels of the args (if indices matter) and the
levels of the constructor arguments.
The declared level [decl] of I is [sort], which might be Prop, Set or some Type u (u fresh
or not).
If [decl >= nat && not (decl = Prop && n >= 2)], the level of the inductive is [decl],
otherwise, _smashing_ occurred.
If [decl] is impredicative (Prop or Set when Set is impredicative), we accept the
declared level, otherwise it's an error.
To compute the allowed elimination sorts, we have the following situations:
- No smashing occurred: all sorts are allowed. (Recall that props that are not
smashed are Empty/Unitary props.)
- Some smashing occurred:
- if [decl] is Type, we allow all eliminations (above or below [decl],
not sure why this is justified in general).
- if [decl] is Set, we used smashing for impredicativity, so only
small sorts are allowed (Prop, Set).
- if [decl] is Prop, only logical sorts are allowed: I has either
large universes inside it or more than 1 constructor.
This does not handle the case where only a Set appeared in I, which
was previously accepted, it seems.
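The level/elimination policy described above can be sketched as follows; the sorts and the [smashing_occurred]/[allowed_elim] names are illustrative stand-ins, not the actual kernel/indtypes.ml code.

```ocaml
(* Toy sorts with the ordering Prop <= Set <= Type i <= Type j (i <= j). *)
type sort = Prop | Set | Type of int
type elim = AllSorts | SmallSorts | LogicalSorts

let sort_leq s1 s2 = match s1, s2 with
  | Prop, _ -> true
  | Set, (Set | Type _) -> true
  | Type i, Type j -> i <= j
  | _, _ -> false

(* Smashing: the declared level is below the natural one, or a Prop
   inductive has more than one constructor. *)
let smashing_occurred ~decl ~natural ~nconstr =
  not (sort_leq natural decl) || (decl = Prop && nconstr >= 2)

let allowed_elim ~decl ~smashed =
  if not smashed then AllSorts
  else match decl with
    | Type _ -> AllSorts   (* all eliminations allowed *)
    | Set -> SmallSorts    (* impredicativity was used: Prop/Set only *)
    | Prop -> LogicalSorts (* only logical sorts *)
```

This is only a reading of the policy as stated; in particular the Type case is accepted here exactly as the text says, without justifying it.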
All the standard library works with these changes. Still have to cleanup
kernel/indtypes.ml. It is a good time to have a whiskey with OJ.
Thanks to Peter Lumsdaine for bug reporting:
- fix externalisation of universe instances (still appearing when no Printing Universes)
- add [convert] and [convert_leq] tactics that keep track of evars and universe constraints.
- use them in [exact_check].
Fix odd behavior in inductive type declarations that allowed silently lowering a Type i parameter
to Set in order to squash a naturally Type i inductive to Set. Reinstate the LargeNonPropInductiveNotInType
exception.
Fix the is_small function not dealing properly with aliases of Prop/Set in Type.
Add check_leq in Evd and use it to decide if we're trying to squash an
inductive naturally in some Type to Set.
- Fix handling of universe polymorphism in typeclasses Class/Instance declarations.
- Don't allow lowering a rigid Type universe to Set silently.
- Move Ring/Field back to Type. It was silently putting R in Set due to the definition of ring_morph.
- Rework inference of universe levels for inductive definitions.
- Make fold_left/right polymorphic on both levels A and B (the list's type). They don't have to be
at the same level.
Handle selective Polymorphic/Monomorphic flag right for records.
Remove leftover command
Fix after update with latest trunk.
Backport patches on HoTT/coq to rebased version of universe polymorphism.
- Fix autorewrite wrong handling of universe-polymorphic rewrite rules. Fixes part of issue #7.
- Fix the [eq_constr_univs] and add an [leq_constr_univs] to avoid eager equation of universe
levels that could just be inequal. Use it during kernel conversion. Fixes issue #6.
- Fix a bug in unification that was failing too early if a choice in unification of universes
raised an inconsistency.
- While normalizing universes, remove Prop in the le part of Max expressions.
- Stop rigidifying the universes on the right hand side of a : in definitions.
- Now Hints can be declared polymorphic or not. In the first case they
must be "refreshed" (undefined universes are renamed) at each application.
- Have to refresh the set of universe variables associated to a hint
when it can be used multiple times in a single proof to avoid fixing
a level... A better & less expensive solution should exist.
- Do not include the levels of let-ins as part of records levels.
- Fix a NotConvertible uncaught exception to raise a more informative
error message.
- Better substitution of algebraics in algebraics (for universe variables that
can be algebraics).
- Fix issue #2, Context was not properly normalizing the universe context.
- Fix issue with typeclasses that were not catching UniverseInconsistencies
raised by unification, resulting in early failure of proof-search.
- Let the result type of definitional classes be an algebraic.
- Adapt coercions to the universe polymorphic flag (Identity Coercion etc.)
- Move away a dangerous call in autoinstance that added constraints for every
polymorphic definitions once in the environment for no use.
Forgot one part of the last patch on coercions.
- Adapt auto/eauto to polymorphic hints as well.
- Factor out the function to refresh a clenv w.r.t. undefined universes.
Use leq_univ_poly in evarconv to avoid fixing universes.
Disallow polymorphic hints based on a constr as it is not possible to infer their universe
context. Only global references can be made polymorphic. Fixes issue #8.
Fix SearchAbout bug (issue #10).
Fix program w.r.t. universes: the universe context of a definition changes
according to the successive refinements due to typechecking obligations.
This requires the Proof modules to return the generated universe substitution
when finishing a proof, and this information is passed in the closing hook.
The interface is not very clean, will certainly change in the future.
- Better treatment of polymorphic hints in auto: terms can be polymorphic now, we refresh their
context as well.
- Needs a little change in test-pattern that seems to break multiary uses of destruct in NZDiv.v, l495.
FIX to do.
Fix [make_pattern_test] to keep the universe information around and still
allow tactics to take multiple patterns at once.
- Fix printing of universe instances that should not be factorized blindly
- Fix handling of the universe context in program definitions by allowing the
hook at the end of an interactive proof to give back the refined universe context,
before it is transformed in the kernel.
- Fix a bug in evarconv where solve_evar_evar was not checking types of instances,
resulting in a loss of constraints in unification of universes and a growing number
of useless parametric universes.
- Move from universe_level_subst to universe_subst everywhere.
- Changed representation of universes for a canonical one
- Adapt the code so that universe variables might be substituted by
arbitrary universes (including algebraics). Not used yet except for
polymorphic universe variables instances.
- Adapt code to new constraint structure.
- Fix setoid rewrite handling of evars that was forgetting the initial
universe substitution !
- Fix code that was just testing conversion instead of keeping the
resulting universe constraints around in the proof engine.
- Make a version of reduction/fconv that deals with the more general
set of universe constraints.
- [auto using] should use polymorphic versions of the constants.
- When starting a proof, don't forget about the algebraic universes in
the universe context.
Rationalize substitution and normalization functions for universes.
Also change back the structure of universes to avoid considering levels
n+k as pure levels: they are universe expressions like max.
Everything is factored out in the Universes and Univ modules now and
the normalization functions can be efficient in the sense that they
can cache the normalized universes incrementally.
- Adapt normalize_context code to new normalization/substitution functions.
- Set more things to be polymorphic, e.g. in Ring or SetoidList for the rest
of the code to work properly while the constraint generation code is not adapted.
And temporarily extend the universe constraint code in univ to solve max(is) = max(js)
by first-order unification (these constraints should actually be implied not enforced).
- Fix romega plugin to use the right universes for polymorphic lists.
- Fix auto not refreshing the poly hints correctly.
- Proper postponing of universe constraints during unification, avoid making
arbitrary choices.
- Fix nf_evars_and* to keep the substitution around for later normalizations.
- Do add simplified universe constraints coming from unification during typechecking.
- Fix solve_by_tac in obligations to handle universes right, and the corresponding
substitution function.
Test global universe equality early during simplification of constraints.
Better hashconsing, but still not good on universe lists.
- Add postponing of "lub" constraints that should not be checked early,
they are implied by the others.
- Fix constructor tactic to use a fresh constructor instance avoiding
fixing universes.
- Use [eq_constr_universes] instead of [eq_constr_univs] everywhere,
this is the comparison function that doesn't care about the universe
instances.
- Almost all the library compiles in this new setting, but some more tactics
need to be adapted.
- Reinstate hconsing.
- Keep Prop <= u constraints that can be used to set the level of a universe
metavariable.
Add better hashconsing and unionfind in normalisation of constraints.
Fix a few problems in choose_canonical, normalization and substitution functions.
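A minimal union-find sketch of the canonicalization hinted at here: levels equated by constraints are merged into a class with a single chosen representative. The names ([find], [union], string-valued levels) are illustrative assumptions, not the actual choose_canonical API.

```ocaml
(* parent maps a level to its parent in the union-find forest;
   absent keys are their own representative. *)
let rec find parent l =
  match Hashtbl.find_opt parent l with
  | None -> l
  | Some l' ->
    let r = find parent l' in
    Hashtbl.replace parent l r;  (* path compression *)
    r

(* Merge [l]'s class into [canonical]'s class, keeping the canonical
   representative (e.g. preferring a global level over local ones). *)
let union parent ~canonical l =
  let rl = find parent l and rc = find parent canonical in
  if rl <> rc then Hashtbl.replace parent rl rc
```

Choosing the representative deliberately (global before local) matters, since the commits below fix exactly the bug of assigning global levels to local ones.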
Fix after merge
Fixes after rebase with latest Coq trunk, everything compiles again,
albeit slowly in some cases.
- Fix module substitution and comparison of table keys in conversion
using the wrong order (should always be UserOrd now)
- Cleanup in universes, removing commented code.
- Fix normalization of universe context which was assigning global
levels to local ones. Should always be the other way!
- Fix universe implementation to implement sorted cons of universes
preserving order. Makes Univ.sup correct again, keeping universe in
normalized form.
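The "sorted cons preserving order" idea can be sketched like this: a universe max-expression is kept as a sorted, duplicate-free list of levels, so [sup] is a merge that preserves the normal form. Types here are illustrative stand-ins for Univ.universe, not the real representation.

```ocaml
type level = int
type universe = level list  (* invariant: strictly sorted, no duplicates *)

let rec sorted_cons (l : level) (u : universe) : universe =
  match u with
  | [] -> [l]
  | l' :: rest ->
    if l < l' then l :: u
    else if l = l' then u               (* drop duplicates *)
    else l' :: sorted_cons l rest

(* sup of two normalized universes stays normalized *)
let sup (u : universe) (v : universe) : universe =
  List.fold_left (fun acc l -> sorted_cons l acc) u v
```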
- In evarconv.ml, allow again a Fix to appear as head of a weak-head normal
form (due to partially applied fixpoints).
- Catch anomalies of conversion as errors in reductionops.ml; sad but
necessary, as eta-expansion might build ill-typed stacks like FProd,
[shift; app Rel 1], since it expands not only when the other side is rigid.
- Fix module substitution bug in auto.ml
- Fix case compilation: impossible cases compilation was generating useless universe
levels. Use an IDProp constant instead of the polymorphic identity to not influence
the level of the original type when building the case construct for the return type.
- Simplify normalization of universe constraints.
- Compute constructor levels of records correctly.
Fall back to levels for universe instances, avoiding issues of unification.
Add more to the test-suite for universe polymorphism.
Fix after rebase with trunk
Fix substitution of universes inside fields/params of records to be made
after all normalization is done and the level of the record has been
computed.
Proper sharing of lower bounds with fixed universes.
Conflicts:
library/universes.ml
library/universes.mli
Constraints were not enforced in compilation of cases
Fix after rebase with trunk
- Canonical projections up to universes
- Fix computation of class/record universe levels to allow
squashing to Prop/Set in impredicative set mode.
- Fix descend_in_conjunctions to properly instantiate projections with universes
- Avoid Context-bound variables taking extra universes in their associated universe context.
- Fix evar_define using the wrong direction when refreshing a universe under cumulativity
- Do not instantiate a local universe with some lower bound to a global one just because
they have the same local glb (they might not have the same one globally).
- Was losing some global constraints during normalization (brought again by the kernel), fixed now.
- Proper [abstract] with polymorphic lemmas (polymorphic if the current proof is).
- Fix silly bug in autorewrite: any hint after the first one was always monomorphic.
- Fix fourier after rebase
- Refresh universes when checking types of metas in unification (avoid (sup (sup univ))).
- Speedup a script in FSetPositive.v
Rework definitions in RelationClasses and Morphisms to share universe
levels as much as possible. This factorizes many useless x <=
RelationClasses.foo constraints in code that uses setoid rewriting.
Slight incompatible change in the implicits for Reflexivity and
Irreflexivity as well.
- Share even more universes in Morphisms using a let.
- Use splay_prod instead of splay_prod_assum which doesn't reduce let's
to find a relation in setoid_rewrite
- Fix [Declare Instance] not properly dealing with let's in typeclass contexts.
Fixes in inductiveops, evarutil.
Patch by Yves Bertot to allow naming universes in inductive definitions.
Fixes in tacinterp not propagating evars correctly.
Fix for issue #27: lowering a Type to Prop is allowed during
inference (resulting in a Type (* Set *)) but kernel reduction
was wrongly refusing the equation [Type (*Set*) = Set].
Fix in interface of canonical structures: an instantiated polymorphic
projection is not needed to lookup a structure, just the projection name
is enough (reported by C. Cohen).
Move from universe inference to universe checking in the kernel.
All tactics have to be adapted so that they carry around their generated
constraints (living in their sigma), which is mostly straightforward.
The more important changes are when refering to Coq constants, the
tactics code is adapted so that primitive eq, pairing and sigma types might
be polymorphic.
Fix another few places in tacinterp and evarconv/evarsolve where the sigma
was not folded correctly.
- Fix discharge adding spurious global constraints on polymorphic universe variables
appearing in assumptions.
- Fixes in inductiveops not taking into account universe polymorphic inductives.
WIP on checked universe polymorphism, it is clearly incompatible
with the previous usage of polymorphic inductives + non-polymorphic
definitions on them as universe levels now appear in the inductive type,
and add equality constraints between universes that were otherwise just
in a cumulativity relation (not sure that was actually correct).
Refined version of unification of universe instances for first-order unification,
preferring unfolding to arbitrary identification of universes.
Moved kernel to universe checking only.
Adapt the code to properly infer constraints during typechecking and
refinement (tactics) and only check constraints when adding
constants/inductives to the environment. Exception made of module
subtyping that needs inference of constraints... The kernel conversion
(fconv) has two modes: checking only and inference, the later being used
by modules only. Evarconv/unification make use of a different strategy for
conversion of constants that prefer unfolding to blind unification of
rigid universes. Likewise, conversion checking backtracks on different universe
instances (modulo the constraints).
- adapt congruence/funind/ring plugins to this new mode, forcing them to
declare their constraints.
- To avoid big performance penalty with reification, make ring/field non-polymorphic
(non-linear explosion in run time to be investigated further).
- pattern and change tactics need special treatment: as they are not _reduction_
but conversion functions, their operation requires to update an evar_map with
new universe constraints.
- Fix vm_compute to work better with universes. If the normal
form is made only of constructors then the readback is correct. However a deeper change will
be needed to treat substitution of universe instances when unfolding constants.
Remove libtypes.ml
Fix after merge.
Fix after rebase with trunk.
**** Add projections to the kernel, as optimized implementations of constants.
- New constructor Proj expects a projection constant applied to its principal
inductive argument.
- Reduction machines shortcut the expansion to a case and directly project the
right argument.
- No need to keep parameters as part of the projection's arguments as they
are inferable from the type of the principal argument.
- ML code now compiles, debugging needed.
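The representation and its reduction shortcut can be sketched as a toy term language; the constructor and field layout here are hypothetical, not Coq's actual AST. Parameters are omitted from the projection node since, as noted above, they are inferable from the principal argument's type.

```ocaml
type constr =
  | Rel of int
  | Construct of string * constr list  (* constructor applied to its fields *)
  | Proj of string * int * constr      (* projection name, field index, record arg *)

(* Projecting out of an applied constructor picks the field directly,
   without building the intermediate case analysis. *)
let reduce_proj = function
  | Proj (_, i, Construct (_, fields)) -> List.nth fields i
  | c -> c
```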
Start debugging the implementation of projections. Externalisation should
keep the information about projections.
Internalization, pattern-matching, unification and reduction
of projections.
Fix some code that used to have _ for parameters that are no longer
present in projections.
Fixes in unification, reduction, term indexing, auto hints based on projections,
add debug printers.
Fix byte-compilation of projections, unification, congruence with projections.
Adapt .v files using "@proj _ _ record" syntax, should come back on this later.
Fix coercion insertion code to properly deal with projection coercions.
Fix [simpl proj]... TODO [unfold proj], proj is not considered evaluable.
- Fix whnf of projections, now respecting opacity information.
- Fix conversion of projections to try first-order first and then
incrementally unfold them.
- Fix computation of implicit args for projections, simply dropping
the information for parameters.
- Fix a few scripts that relied on projections carrying their parameters (few at's,
rewrites).
- Fix unify_with_subterm to properly match under projections.
- Fix bug in cooking of projections.
- Add pattern PProj for projections.
- A very strange bug appeared in BigZ.v, making coqtop segfault on the export
of BigN... to fix
Fixes after rebase with trunk. Everything compiles now, with efficient
projections.
Fixes after rebase with trunk (esp. reductionops).
Remove warnings, backport patch from old univs+projs branch.
Proper expansion of projections during unification.
They are considered as maybe flexible keys in evarconv/unification. We
try first-order unification and otherwise expand them as necessary,
completely mimicking the original behavior, when they were
constants. Fix head_constr_bound interface, the arguments are never
needed (they're outside their environment actually). [simpl] and
[red]/[intro] should behave just like before now.
Fix evarconv that was giving up on proj x = ?e problems too early.
- Port patch by Maxime Denes implementing fast projections in the native conversion.
- Backport patch to add eta-expansion for records.
Do not raise an exception but simply fail if trying to do eta on an inductive that is not a record.
Fix projections detyping/matching and unification.ml not always
recovering on first-order universe inequalities.
Correct eta-expansion for records, and change strategy for conversion
with projections to favor reduction over first-order unification a
little more. Fix a bug in Ltac pattern matching on projections.
Fix evars_reset_evd to not recheck existing constraints in case it is just an update
(performance improvement for typeclass resolution).
- Respect Global/Transparent oracle during unification. Opaque means
_never_ unfolded there.
- Add empty universes as well as the initial universes (having Prop < Set).
- Better display of universe inconsistencies.
- Add Beta Ziliani's patch to go fast by avoiding imitation when possible.
- Allow instantiation by lower bound even if there are universes above
- (tentative) In refinement, avoid incremental refinement of terms
containing no holes and do it in one step (much faster on big terms).
Turned on only if not a checked command.
Remove dead code in univ/universes.ml and cleanup setup of hashconsing,
for a small speed and memory footprint improvement.
- Fix bug in unification using cumulativity when conversion should have been used.
- Fix unification of evars having type Type, no longer forcing them to be equal
(potentially more constraints): algorithm is now complete w.r.t. cumulativity.
- In clenvtac, use refine_nocheck as we are guaranteed to get well-typed terms
from unification now, including sufficient universe constraints. Small general
speedup.
- Fix inference of universe levels of inductive types to avoid smashing
inadvertently from Set to Prop.
- Fix computation of discharged hypotheses forgetting the arity in inductives.
- Fix wrong order in printing of universe inconsistency explanation
- Allow coercions between two polymorphic instances of the same inductive/constant.
- Do evar normalization and saturation by classes before trying to use program coercion
during pretyping.
- In unification, force equalities of universes when unifying the same rigid head constants.
- Fix omission of projections in constr_leq
- Fix [admit] tactic's handling of normalized universes.
Fix typing of projections not properly normalizing w.r.t. evars, sometimes resulting in an anomaly.
Adapt rewrite to work with computational relations (in Type), while
maintaining backward compatibility with Propositional rewriting.
Introduce a [diff] function on evar maps and universe contexts to
properly deal with clause environments. Local hints in auto now store
just the extension of the evar map they rely on, so merging them becomes
efficient. This fixes an important performance issue in auto and typeclass
resolution in presence of a large number of universe constraints.
Change FSetPositive and MSetPositive to put their [elt] and [t] universes in
Type to avoid restricting global universes to [Set]. This is due to [flip]'s
polymorphic type being fixed in monomorphic instances of Morphisms.v,
and rewriting hence forcing unification of levels that could be left unrelated.
- Try a fast_typeops implementation of kernel type inference that
allocates less by not rebuilding the term, shows a little performance
improvement, and less allocation.
- Build universe inconsistency explanations lazily, avoiding huge blowup
(x5) in check_constraints/merge_constraints in time and space (these
are stressed in universe polymorphic mode).
- Hashcons universe instances.
Add interface file for fast_typeops
Use monomorphic comparisons, little optimizations of hashconsing and
comparison in univ.ml.
Fix huge slowdown due to building huge error messages. Lazy is not
enough to tame this completely.
Fix last performance issue, due to abstracts building huge terms abstracting on parts of the section
context. Was due to wrong handling of Let... Qed.s in abstract. Performance is a tiny bit better than the
trunk now.
First step at compatibility layer for projections.
Compatibility mode for projections. c.(p), p c use primitive projs,
while @p refers to an expansion [λ params c, c.(p)]. Recovers almost
full source compatibility with trunk scripts, except when mixing
@p and p and doing syntactic matching (they're unifiable though).
Add a [Set Primitive Projections] flag to set/unset the use of primitive
projections, selectively for each record. Adapt code to handle both the
legacy encoding and the primitive projections. Library is almost
source-to-source compatible, except for syntactic operations relying
on the presence of parameters. In primitive projections mode, @p refers
to an expansion [λ params r, r.(p)]. More information in CHANGES (to be
reformated/moved to reference manual).
Backport changes from HoTT/coq:
- Fix anomaly on uncaught NotASort in retyping.
- Better recognition of evars that are subject to typeclass resolution.
Fixes bug reported by J. Gross on coq-club.
- Print universe polymorphism information for parameters as well.
Fix interface for unsatisfiable constraints error, now a type error.
Try making ring polymorphic again, with a big slowdown, to be investigated.
Fix evar/universe leak in setoid rewrite.
- Add profiling flag
- Move setoid_ring back to non-polymorphic mode to compare perfs with trunk
- Change unification to allow using infer_conv more often (big perf culprit),
but semantics of backtracking on unification of constants is not properly
implemented there.
- Fix is_empty/union_evar_universe_context forgetting about some assignments.
- Performance is now very close to the trunk from june,
with projections deactivated.
|
| |
This reverts commit f694544d016b085b3cd10007b9f7716ae2c3b022.
This commit was wrong, since (at least) the highparsing part depends
on the toplevel directory. I still haven't had time to fix that, so
in the meantime let's revert it.
|
| |
In the last commit, we used grep to decide whether a .ml4 could be
compiled during the initial phase or not. Instead, we now rely on
a simpler directory dichotomy:
- config lib kernel library pretyping interp parsing grammar
are considered initial (and shouldn't contain grammar-dependent .ml4),
see $(GRAMSRCDIRS) in Makefile.common
- the grammar-dependent .ml4 could be in any other directories
Currently, they are in: tactics toplevel plugins/*
|
| |
|
| |
Yet another revision of the build system. We avoid relying on the awkward
include-which-fails-but-works-finally-after-a-retry feature of GNU make.
This was working, but was quite hard to understand. Instead, we reuse
the idea of two explicit phases (as in 2007 and its stage{1,2,3}), but
in a lighter way. The main Makefile calls Makefile.build twice :
- first for building grammar.cma (and q_constr.cmo), with a restricted
set of .ml4 to consider (see variables BUILDGRAMMAR and ML4BASEFILES).
- then on the true target(s) asked by the user (using the special variable
MAKECMDGOALS).
In practice, this should change very little in the concrete developer's life,
let me know otherwise. A few more messages of make due to the explicit
first sub-call, but no more strange "not ready yet" messages...
Btw: we should now correctly handle the parallel compilation of multiple
targets (e.g. make -jX foo bar). As reported by Pierre B, this was
triggering earlier two separate sub-calls to Makefile.build, one
for foo, the other for bar, with possibly nasty interactions in case
of parallelism.
In addition, some cleanup of Makefile.build, removal of useless :: rules,
etc etc.
|
| |
|
| |
|
| |
No need to place these binaries apart, and anyway they haven't been
(shell) scripts for ages.
git-svn-id: svn+ssh://scm.gforge.inria.fr/svn/coq/trunk@16432 85f007b7-540e-0410-9357-904b9bb8a0f7
|
|
| |
native OCaml code.
Warning: the "retroknowledge" mechanism has not been ported to the native
compiler, because integers and persistent arrays will ultimately be defined as
primitive constructions. Until then, computation on numbers may be faster using
the VM, since it takes advantage of machine integers.
git-svn-id: svn+ssh://scm.gforge.inria.fr/svn/coq/trunk@16136 85f007b7-540e-0410-9357-904b9bb8a0f7
|
|
| |
The idea was to allow rebuilding coqtop without the whole stdlib,
but it's not working anymore since the stdlib also depends on
plugins .cmxs, hence its compilation will be triggered anyway.
Since I've no idea how to restore the old behavior (except via
hacking the output of coqdep with more ORDER_ONLY hack), I simply
declare this option dead, and remove it for improving clarity.
NB: an imperfect workaround is to touch all the .vo after rebuilding
coqtop and the plugins...
git-svn-id: svn+ssh://scm.gforge.inria.fr/svn/coq/trunk@15823 85f007b7-540e-0410-9357-904b9bb8a0f7
|
|
| |
For starting a bare coqtop, the recommended option is now "-noinit"
that skips the load of Prelude.vo. Option "-nois" is kept for
compatibility, it is now an alias to "-noinit".
git-svn-id: svn+ssh://scm.gforge.inria.fr/svn/coq/trunk@15753 85f007b7-540e-0410-9357-904b9bb8a0f7
|