| Commit message | Author | Age |
|
We also make the code of [compact] in kernel/univ.ml a bit clearer.
|
The previous code mimicked the C implementation, which used a quadratic
algorithm. We now simply use the good old exponential reallocation
strategy, which is amortized O(1).
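For illustration only (generic OCaml, not the actual kernel code), the
doubling strategy looks like this; n pushes then cost O(n) in total:

  type 'a buffer = {
    mutable data : 'a array;
    mutable size : int;
    dummy : 'a;                      (* filler value for unused slots *)
  }

  let create dummy = { data = Array.make 16 dummy; size = 0; dummy }

  let push buf x =
    let cap = Array.length buf.data in
    if buf.size = cap then begin
      (* exponential reallocation: double the capacity when full *)
      let data = Array.make (2 * cap) buf.dummy in
      Array.blit buf.data 0 data 0 buf.size;
      buf.data <- data
    end;
    buf.data.(buf.size) <- x;
    buf.size <- buf.size + 1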
|
We expected `nparams + nrealargs + consnrealargs`, but `nrealargs`
should not be there. This breaks cumulativity of constructors for any
inductive with indices (records have no indices, so they still work,
which explains why the test case in #6747 works).
|
In Reductionops.infer_conv we did not have enough information to
properly try to unify irrelevant universes. This requires changing the
Reduction.universe_compare type a bit.
|
Previously [fun x : Ind@{i} => x : Ind@{j}] with Ind some cumulative
inductive would try to generate a constraint [i = j] and use
cumulativity only if this resulted in an inconsistency. This is
confusingly different from the behaviour with [Type] and means
cumulativity can only be used to lift between universes related by
strict inequalities. (This isn't a kernel restriction so there might
be some workaround to send the kernel the right constraints, but
not in a nice way.)
See modified test for more details of what is now possible.
Technical notes:
When universe constraints were inferred by comparing the shape of
terms without reduction, cumulativity was not used, so overly strict
equality constraints were generated. In order to use cumulativity we
then had to make this comparison fail, falling back to full
conversion.
When unifying two instances of a cumulative inductive type, if any
Irrelevant universes are present we try to unify them, provided they
are flexible.
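As a purely illustrative sketch of this last point (the types and the
function below are hypothetical, not the kernel's):

  type variance = Irrelevant | Covariant | Invariant
  type level = { name : string; flexible : bool }

  type outcome =
    | Unify of level * level      (* instantiate a flexible level *)
    | Constrain of level * level  (* emit the usual constraint *)

  (* decide what to do for one universe position of the two instances *)
  let compare_position variance (u : level) (v : level) =
    match variance with
    | Irrelevant when u.flexible || v.flexible -> Unify (u, v)
    | Irrelevant | Covariant | Invariant -> Constrain (u, v)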
|
We limit fixpoints to Finite inductive types, so that BiFinite
inductives (non-recursive records) are excluded from fixpoint
construction. This is a regression in the sense that e.g. fixpoints
on unit records were allowed before. Primitive records with
eta-conversion are included in the BiFinite types.
Fix deprecation
Fix error message: the inductive type needs to be recursive for fix to work
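A rough sketch of the intended guard, assuming the Finite / CoFinite /
BiFinite recursivity kinds (the helper below is hypothetical):

  type recursivity_kind =
    | Finite      (* recursive inductive type: fix allowed *)
    | CoFinite    (* coinductive type: use cofix instead *)
    | BiFinite    (* non-recursive record: no fixpoints *)

  let check_fixpoint_allowed = function
    | Finite -> ()
    | BiFinite -> failwith "fixpoints require a recursive inductive type"
    | CoFinite -> failwith "use cofix on coinductive types"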
|
- Nothing to check in conversion as they have a common supertype
by typing.
- In inference, enforce that one is lower than the other.
|
Now that the cache is distinct, there should be no nasty side-effects changing
the value of one side while reducing the other.
|
We simply treat them as an application of an atom to its instance,
and in the decompilation phase we reconstruct the instance from the stack.
This grants wish BZ#5659.
|
This simplifies the representation of values and brings it closer to
that of the native compiler.
|
This intermediate representation serves two purposes:
1- It is a preliminary step for primitive machine integers, as iterators
will be compiled to Clambda.
2- It makes the VM compilation passes closer to those of
native_compute. Once we unify the representation of values, we should
be able to factorize the lambda-code generation between the two
compilers, as well as the reification code.
This code was written by Benjamin Grégoire and myself.
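As a purely hypothetical miniature of such an intermediate
representation (the actual Clambda constructors differ):

  type clambda =
    | Lvar of int                     (* de Bruijn index *)
    | Lapp of clambda * clambda array (* application *)
    | Llam of int * clambda           (* arity and body *)
    | Lconst of string                (* global reference *)
    | Lint of int                     (* machine integer literal *)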
|
constraints.
|
We defer the computation of the universe quantification to the upper layer,
outside of the kernel.
|
Instead of using a linear representation, we simply use a table that maps
each kind of relocation to the list of positions at which it must be applied.
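Sketched with made-up relocation kinds and helper names, the idea is:

  type reloc = RelocGlobal of string | RelocLabel of int  (* made-up kinds *)

  let add_reloc (table : (reloc, int list ref) Hashtbl.t) kind pos =
    match Hashtbl.find_opt table kind with
    | Some positions -> positions := pos :: !positions
    | None -> Hashtbl.add table kind (ref [pos])

  (* apply each relocation once per recorded position *)
  let patch_all table apply =
    Hashtbl.iter (fun kind positions -> List.iter (apply kind) !positions) table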
|
The previous implementation used a list of pairs, which takes 9n words
where n is the number of relocations. We instead use two arrays, for a
total memory cost of 2n + 5 words.
The use of arrays may turn out to be problematic on 32-bit machines; I
am unsure whether we will hit this limitation in practice.
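A sketch of the layout with hypothetical field names; each unboxed int
array of length n takes n + 1 words and the wrapping record takes 3,
hence 2n + 5 words in total:

  type reloc_table = {
    reloc_kind : int array;  (* encoded relocation kind for entry i *)
    reloc_pos  : int array;  (* code offset to patch for entry i *)
  }

  let iter_relocs f t =
    Array.iteri (fun i kind -> f kind t.reloc_pos.(i)) t.reloc_kind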
|
Instead we thread a buffer.
|
This is actually not needed, as the only thing we do with this Bytes.t is to
pass it to a C function which will use it in a read-only way.
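For instance (hypothetical stub; the actual primitive and its
signature differ), no defensive copy is needed when the C side only
reads its argument:

  (* hypothetical C stub that only reads the buffer *)
  external emit_tcode : Bytes.t -> unit = "caml_emit_tcode_stub"

  let emit (code : string) =
    (* safe: the C function never mutates its argument *)
    emit_tcode (Bytes.unsafe_of_string code)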
|
This shouldn't matter because the tcode_of_code function is pure: its
only effect is allocating a string and filling it with the translated
bytecode.
|
This reduces the possibility of wreaking havoc while making the API nicer.
|
information.
|
This bug had been present since the first patch adding universe
polymorphism handling in the VM (Coq 8.5). Note that the unsoundness
can probably be observed even without universe polymorphism.
|