We expected `nparams + nrealargs + consnrealargs`, but the `nrealargs`
should not be there. This breaks cumulativity of constructors for any
inductive with indices (records still work, which explains why the test
case in #6747 passes).
In Reductionops.infer_conv we did not have enough information to
properly try to unify irrelevant universes. This requires changing the
Reduction.universe_compare type a bit.
- Nothing to check in conversion as they have a common supertype
by typing.
- In inference, enforce that one is lower than the other.
Now that the cache is distinct, there should be no nasty side-effects changing
the value of one side while reducing the other.
information.
This ensures by construction that we never infer constraints outside
the variance model.
Since cumulativity of an inductive type is the set of universe
constraints which make a term convertible with its universe-renamed
copy, the only constraints we can get are between a universe and its
copy. As such, we do not need to be able to represent arbitrary
constraints between universes and copied universes in a double-sized
ucontext; instead we can just keep around an array describing whether
each bound universe is covariant, invariant or irrelevant (CIC has no
contravariant conversion rule).

Printing is fairly obtuse and should be improved: when we print the
CumulativityInfo we add marks to the universes of the instance: = for
invariant, + for covariant and * for irrelevant. I.e.

```
Record Foo@{i j k} := { foo : Type@{i} -> Type@{j} }.
Print Foo.
```

gives

```
Cumulative Record Foo : Type@{max(i+1, j+1)} := Build_Foo
  { foo : Type@{i} -> Type@{j} }
(* =i +j *k |= *)
```
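As a rough sketch of the data structure this describes (names are
simplified assumptions, not the kernel's actual definitions):

```ocaml
(* Minimal sketch of the per-universe variance marks described above;
   the names and types are illustrative assumptions. *)
type variance =
  | Irrelevant  (* printed as "*" *)
  | Covariant   (* printed as "+" *)
  | Invariant   (* printed as "=" *)

(* One mark per bound universe of the instance; for Foo@{i j k} above
   this would be [| Invariant; Covariant; Irrelevant |]. *)
type cumulativity_info = variance array

let mark = function
  | Irrelevant -> "*"
  | Covariant -> "+"
  | Invariant -> "="
```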
The part in Reduction should be semantics-preserving, but Reductionops
only tried cumulativity if equality failed. This is probably wrong, so
I changed it.
This fixes the previous patch in rare corner cases where unification code
was relying on both kernel conversion and a specific transparent state.
This definitely qualifies as a micro-optimization, but it would not be
performed by Flambda. Indeed, it is unsound in general w.r.t. OCaml
semantics, as it involves a fixpoint and changes potential non-termination.
In our case it doesn't matter semantically, but it is indeed observable
on computation-intensive developments like UniMath.
- use Redflags.red_projection
- share unfold_projection between CClosure and Reduction
This heuristic is justified by the fact that during a conversion check
between a flexible and a rigid term, the flexible one is eventually going
to be fully weak-head normalized. So in this case, instead of performing
many small reduction steps on the flexible term, we perform full weak-head
reduction, including delta.

It is slightly more efficient in actual developments, and it fixes a corner
case encountered by Jason Gross.

Fixes #6667: Kernel conversion is much, much slower than `Eval lazy`.
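A minimal sketch of the strategy, assuming helpers `is_flexible` (the
head is a delta-unfoldable constant) and `whd_all` (full weak-head
normalization, delta included); this is illustrative, not the kernel's
actual code:

```ocaml
(* Illustrative sketch: eagerly normalize the flexible side of a
   flexible/rigid conversion check instead of unfolding step by step.
   [is_flexible], [whd_all] and [conv] are assumed helpers. *)
let conv_flex_rigid ~is_flexible ~whd_all conv t u =
  match is_flexible t, is_flexible u with
  | true, false -> conv (whd_all t) u  (* t will be fully reduced anyway *)
  | false, true -> conv t (whd_all u)
  | _, _ -> conv t u                   (* keep the usual strategy *)
```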
Adding a "let-in"-sensitive function hnf_prod_applist_assum to
instantiate parameters and using it for printing.
Thanks to PMP for reporting.
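As an illustration of what "let-in"-sensitivity means here, a toy
version on a simplified type language might look as follows (a sketch
under assumed names; the real function operates on kernel terms and
performs substitution):

```ocaml
(* Toy illustration of a "let-in"-sensitive instantiation: products
   consume one argument each, while let-ins are traversed without
   consuming any. Substitution is omitted for brevity. *)
type ty =
  | Prod of string * ty         (* forall x, body *)
  | LetIn of string * int * ty  (* let x := v in body *)
  | Base of string

let rec applist_assum ty args =
  match ty, args with
  | _, [] -> ty
  | Prod (_, body), _ :: rest -> applist_assum body rest
  | LetIn (_, _, body), _ -> applist_assum body args
  | Base _, _ -> invalid_arg "applist_assum: too many arguments"
```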
This is a new module introduced in OCaml 4.05, I think; it can create
problems when linking with the OCaml toplevel for `Drop`.
Extending terms is notoriously difficult. We try to get more help from
the compiler by making sure such an extension will trigger non-exhaustive
pattern matching warnings.
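For instance (an illustrative toy, not kernel code), avoiding a
catch-all case means that adding a constructor makes the compiler point
at every match that must be updated:

```ocaml
(* Toy example of the idea: no catch-all "_" case, so adding a new
   constructor to [term] triggers warning 8 (non-exhaustive pattern
   matching) in every function that needs updating. *)
type term = Rel of int | Lambda of term | App of term * term

let rec size = function
  | Rel _ -> 1
  | Lambda b -> 1 + size b
  | App (f, a) -> 1 + size f + size a
(* Adding [| Meta of int] to [term] now makes the compiler flag [size]
   instead of silently falling through a wildcard branch. *)
```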
We go up to `Term`, which is the bulk of the changes.
This will allow merging `Names` back with `API.Names`.
The infos already contain the env. Note that it was actually only used
in the two lookup_mind lines.
This function was lurking around, waiting to bite anybody willing to use it.
We instead use a better API, which is correct and much less error-prone.
This function breaks the abstraction barrier of abstract universe contexts,
as it provides a way to observe the bound names of such a context. We remove
all the uses that can easily be gotten rid of with the current API.
Only try using cumulativity in conversion/subtyping if the universe
instances are non-empty
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Except I have disabled the minimization of universes after sections, as
it seems to interfere with the STM machinery, causing files like
test-suite/vio/print.v to loop when processed asynchronously. This is
very peculiar and needs more investigation, as the aforementioned file
does not have any sections or any universe polymorphic definitions!

commit fc785326080b9451eb4700b16ccd3f7df214e0ed
Author: Amin Timany <amintimany@gmail.com>
Date: Mon Apr 24 17:14:21 2017 +0200

    Revert STL to monomorphic

commit 62b573fb13d290d8fe4c85822da62d3e5e2a6996
Author: Amin Timany <amintimany@gmail.com>
Date: Mon Apr 24 17:02:42 2017 +0200

    Try unifying universes before applying subtyping

commit ff393742c37b9241c83498e84c2274967a1a58dc
Author: Amin Timany <amintimany@gmail.com>
Date: Sun Apr 23 13:49:04 2017 +0200

    Compile more of the STL with universe polymorphism

commit 5c831b41ebd1fc32e2dd976697c8e474f48580d6
Author: Amin Timany <amintimany@gmail.com>
Date: Tue Apr 18 21:26:45 2017 +0200

    Made more progress on compiling the standard library

commit b8550ffcce0861794116eb3b12b84e1158c2b4f8
Author: Amin Timany <amintimany@gmail.com>
Date: Sun Apr 16 22:55:19 2017 +0200

    Make more number-theoretic modules monomorphic

commit 29d126d4d4910683f7e6aada2a25209151e41b10
Author: Amin Timany <amintimany@gmail.com>
Date: Fri Apr 14 16:11:48 2017 +0200

    WIP: more of the standard library compiles

    Also: Matthieu fixed a bug in the rewrite system which was faulty
    when introducing new morphisms (the Add Morphism command).

commit 23bc33b843f098acaba4c63c71c68f79c4641f8c
Author: Amin Timany <amintimany@gmail.com>
Date: Fri Apr 14 11:39:21 2017 +0200

    WIP: more of the standard library compiles

    We have implemented convertibility of constructors up to mutual
    subtyping of their corresponding inductive types. This is similar
    to the behavior of template polymorphism.

commit d0abc5c50d593404fb41b98d588c3843382afd4f
Author: Amin Timany <amintimany@gmail.com>
Date: Wed Apr 12 19:02:39 2017 +0200

    WIP: trying to get the standard library to compile with universe
    polymorphism

    We are trying to prune universes after a section ends. Sections add
    a load of universes that do not appear in the body, the type, or
    the constraints.
Fall back to equating the levels when the inductive is not fully applied,
instead of failing.
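A minimal sketch of the fallback, under assumed names (`cmp_cumul` for
the cumulativity check, `cmp_eq` for level equality); not the kernel's
actual code:

```ocaml
(* Illustrative sketch: cumulativity only applies to a fully applied
   inductive; otherwise force the two instances to be equal rather
   than fail. [cmp_cumul] and [cmp_eq] are assumed comparisons. *)
let conv_ind_instances ~nargs ~arity cmp_cumul cmp_eq u1 u2 cstrs =
  if nargs = arity then cmp_cumul u1 u2 cstrs
  else cmp_eq u1 u2 cstrs
```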
As per https://github.com/coq/coq/pull/716#issuecomment-305140839
Partially using
```bash
git grep --name-only 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*[^\.!]")' | xargs sed s'/\(anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*\s*[^\.! ]\)\s*")/\1.")/g' -i
```
and
```bash
git grep --name-only ' !"' | xargs sed s'/ !"/!"/g' -i
```
The rest were manually edited by looking at the results of
```bash
git grep anomaly | grep '\.ml' | grep -v 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp\.\)\?(\(\(Pp.\)\?str\)\?\s*".*\(\.\|!\)")' | grep 'anomaly\($\|[^_]\)' | less
```
Instead of calling the whole reduction machinery, we check before reducing
that a term is an applied atom, i.e. inductive, constructor, evar or meta.
In that case, the abstract machine acts as the identity but needs to
destruct and reconstruct the whole term, which can be very costly.

This fixes part of bug #5421 (vm_compute is very slow at doing nothing),
where recomputation of the type of a big inductive was incredibly expensive.
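A self-contained sketch of the shortcut on a toy term type (the names
are assumptions, not Coq's actual term representation):

```ocaml
(* Toy sketch of the shortcut: if the head of the application spine is
   an atom, the machine would only rebuild the term identically, so we
   return it unchanged instead of running the reduction machinery. *)
type term =
  | Ind of string | Construct of string | Evar of int | Meta of int
  | App of term * term list
  | Lambda of string * term

let head = function App (h, _) -> h | t -> t

let is_atom t = match head t with
  | Ind _ | Construct _ | Evar _ | Meta _ -> true
  | _ -> false

let whd_with_machine run_machine t =
  if is_atom t then t  (* identity case: skip the costly round-trip *)
  else run_machine t
```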
This performance bug was introduced 9 years ago in a8b0345, where the
responsibility of normalizing the term went from ccnv to eqappr in Reduction.
As a result, all recursive calls to eqappr that were preemptively reducing the
term ended up calling whd_stack twice, once by themselves, and once in the
subsequent call to eqappr.

This caused an important slowdown for conversion-intensive proofs, as
whd_stack calls CClosure.zip to perform in-place term sharing, leading to
useless huge re-allocations and repetitive write barriers.

Now that eqappr always head-normalizes the term beforehand, we simply don't
call whd_stack anymore when jumping to eqappr.
We unroll the map operation by hand in two performance-critical cases so as
not to call the generic array allocation function in OCaml, and allocate
directly on the minor heap instead. The generic array function is slow because
it needs to discriminate between float and non-float arrays. The unrolling
replaces this by a simple increment of the minor heap pointer and moves from
the stack.

The amount of unrolling was determined by experimental measures on various Coq
developments. It looks like most of the maps are for small arrays of size less
than or equal to 4, so this is what is implemented here. We could probably
extend it to an even bigger number, but that would result in ugly code. From
what I've seen, virtually all maps are of size less than 16, so we could
probably be almost optimal by going up to 16 unrollings, but the code
tradeoffs are not obvious. Maybe when we have PPX?
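A sketch of what such a hand-unrolled map looks like (an assumed,
simplified version; the real code lives in Coq's array utilities):

```ocaml
(* Sketch of a hand-unrolled map: array literals like [| x; y |] are
   allocated directly on the minor heap, avoiding the generic (and
   float-discriminating) array allocation path used by Array.map. *)
let map_small f a =
  match Array.length a with
  | 0 -> [||]
  | 1 -> [| f a.(0) |]
  | 2 -> [| f a.(0); f a.(1) |]
  | 3 -> [| f a.(0); f a.(1); f a.(2) |]
  | 4 -> [| f a.(0); f a.(1); f a.(2); f a.(3) |]
  | _ -> Array.map f a  (* fall back to the generic path *)
```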