| Commit message | Author | Age |
|
|
|
|
|
|
| |
The function is defined with a typo, but it is called with the same env that
is mistakenly not shadowed.
An alternative to this commit would be to fix the typo instead.
|
|
|
|
| |
Unused since d95306323 (remove template polymorphic definitions).
|
|\
| |
| |
| | |
kernel typing.
|
| |
| |
| |
| |
| |
| |
| | |
This reduces kernel bloat and removes code from the TCB, as compatibility
projections are now retypechecked as normal definitions would have been.
This should have no effect on efficiency as this only happens once at
definition time.
|
| |
| |
| |
| |
| |
| | |
This field used to signal that a constant was the compatibility
eta-expansion of a primitive projection, but since a previous cleanup in
the kernel it had become useless.
|
| |
| |
| |
| |
| | |
Instead of having the projection data in the constant data, we have it
independently in the environment.
|
| |
| |
| |
| |
| | |
We now have only two notions of environments in the kernel: env and
safe_env.
|
|/
|
|
|
| |
For some reason the code was returning an int option when only the value of
the integer was relevant.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In practice, we only send to the kernel polymorphic definitions whose
side-effects have been exported first, and whose bodies have already been
forced. Therefore, no constraints are generated by the kernel.
Nonetheless, it might be desirable in the future to also defer the
computation of polymorphic proofs whose constraints have been made explicit
in the type. It is not clear when this will be implemented. Still, the
current check is not too drastic, as it only prevents monomorphic side-effects
from appearing inside polymorphic opaque definitions. The only way I know of to
trigger such a situation is to generate a scheme in a proof, as even abstract
actually preserves the polymorphism status of the surrounding proof. It
would be possible to work around such rare instances anyway.
As for internal constraints generated by a potentially deferred polymorphic
body, it is easy to check, at a higher level inside the future computation,
that they are a subset of the exported ones, and to fail there.
So in practice this patch is not too restrictive, and it highlights a rather
strong invariant of the kernel code.
|
|
|
|
|
| |
This datatype enforces stronger invariants, e.g. that the substitution
codomain only contains a contiguous interval of variables from 0 to n - 1.
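The invariant can be illustrated with a toy sketch; the module and function names below are made up for illustration, not the actual kernel API. The point is that a smart constructor checks the codomain once, so every value of the abstract type satisfies the invariant by construction:

```ocaml
(* Toy sketch: a substitution on de Bruijn-style indices whose codomain
   is checked to lie in the interval 0 .. n - 1 at construction time,
   instead of that being an unchecked convention. Illustrative names. *)
module Ranged_subst : sig
  type t
  val make : int array -> t option  (* None if the invariant fails *)
  val apply : t -> int -> int
end = struct
  type t = int array
  let make a =
    let n = Array.length a in
    (* every image must be a variable in 0 .. n - 1 *)
    if Array.for_all (fun i -> 0 <= i && i < n) a then Some a else None
  let apply s i = s.(i)
end

let () =
  (match Ranged_subst.make [| 1; 0; 2 |] with
   | Some s -> assert (Ranged_subst.apply s 0 = 1)
   | None -> assert false);
  (* 5 is outside 0 .. 1, so construction is rejected *)
  assert (Ranged_subst.make [| 5; 0 |] = None)
```

Because `t` is abstract, client code cannot build an ill-formed substitution, which is the kind of stronger invariant the commit refers to.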
|
|
|
|
|
|
| |
We force the upper layers to extrude the universe constraints before sending
a definition to the kernel. This simplifies the suspicious handling of
polymorphic constraints for section-local definitions.
|
|
|
|
| |
This makes it possible to statically ensure well-formedness properties.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Let definitions have the same behaviour whether they are ended with a Qed or a
Defined command, i.e. they are treated as if they were transparent. Indeed,
it doesn't make sense for them to be opaque, as they are going to be expanded
away at the end of the section.
For an unknown reason, the handling of side-effects in Let definitions considers
them as if they were opaque, i.e. the effects are inlined in the definition.
This discrepancy has bad consequences in the kernel, where one is forced to
juggle with universe constraints generated by polymorphic Let definitions.
As a first cleaning phase, we simply enforce by typing that Let definitions
must be purified before reaching the kernel.
This has the intended side-effect of making side-effects persistent in Let
definitions, as if they were indeed truly transparent.
|
|
|
|
|
|
|
|
| |
Also use constant_universes_entry instead of a bool flag to indicate
polymorphism in ParameterEntry.
There are a few places where we convert back to ContextSet because
check_univ_decl returns a UContext; this could be improved.
|
|
|
|
| |
We go up to `Term`, which accounts for the main bulk of the changes.
|
|
|
|
| |
This will allow merging `Names` back with `API.Names`.
|
|
|
|
| |
This is a first step towards some of the solutions proposed in #6008.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We make Vernacentries.interp functional wrt state, and thus remove
state-handling from `Future`. Now, a future needs a closure if it
wants to preserve state.
Consequently, `Vernacentries.interp` takes a state, and returns the
new one.
We don't explicitly thread the state in the STM yet; instead, we
recover the state that was used before and pass it explicitly to
`interp`.
I have tested the commit with the files in interactive, but be aware
that some new bugs may appear or old ones be made more apparent.
However, I am confident that this step will improve our understanding
of bugs.
In some cases, we perform a bit more summary wrapping/unwrapping. This
will go away in future commits; informal timings for a full make:
- master:
real 2m11,027s
user 8m30,904s
sys 1m0,000s
- no_futures:
real 2m8,474s
user 8m34,380s
sys 0m59,156s
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Also add an output test for Suggest Proof Using.
This changes the .aux output: instead of getting a key
>context_used "$hyps;$suggest"
where $hyps is a list of the used hypotheses and $suggest is the
;-separated suggestions (or the empty string if Suggest Proof Using is
unset), there is a key
>context_used "$hyps"
and if Suggest Proof Using is set also a key
>suggest_proof_using "$suggest"
For instance instead of
112 116 context_used "B A;A B;All"
we get
112 116 context_used "B A"
112 116 suggest_proof_using "A B;All"
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We move toplevel/STM flags from `Flags` to their proper components;
this ensures that low-level code doesn't depend on them, which was
incorrect and a source of many problems wrt the interfaces.
Lower-level components should not be aware of whether they are running in
batch or interactive mode, but should instead provide a functional interface.
In particular:
== Added flags ==
- `Safe_typing.allow_delayed_constants`
Allow delayed constants in the kernel.
- `Flags.record_aux_file`
Output `Proof using` information from the kernel.
- `System.trust_file_cache`
Assume that the file system won't change during our run.
== Deleted flags ==
- `Flags.compilation_mode`
- `Flags.batch_mode`
Additionally, we modify the STM entry point and `coqtop` to account
for the needed state. Note that testing may be necessary, as the
number of possible combinations exceeds what the test-suite / regular
use covers.
The next step is to fix the initialization problems [c.f. Bugzilla],
which will require a larger rework of the STM interface.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
The use of template polymorphism in constants was quite limited, as it
only applied to definitions that were exactly inductive types without any
parameter whatsoever. Furthermore, it seems that following the introduction
of polymorphic definitions, the code path enforced regular polymorphism as
soon as the type of a definition was given, which was almost always the
case in practice.
Removing this feature had no observable effect on the test-suite, nor on
any development that we monitor on Travis. I believe it is safe to
assume it had become useless.
|
|
|
|
| |
This ought to ease the understanding of the code.
|
|
|
|
| |
This was an easy-to-prove property that I somehow overlooked.
|
|
|
|
|
| |
This is actually useless: the code does not depend on the value of the
entry for side-effects.
|
|
|
|
|
| |
We sprinkle a few GADTs in the kernel in order to statically ensure that
entries are pure, so that we get stronger invariants.
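The technique can be sketched with toy types; the constructors and the string payloads below are illustrative, not the kernel's actual entry types. A GADT index records whether an entry may carry side-effects, so a function that requires purity states it in its type:

```ocaml
(* Toy sketch of using a GADT to track purity statically.
   Phantom indices distinguish the two kinds of entries. *)
type pure
type effectful

type _ entry =
  | Pure_entry : string -> pure entry
  | Effect_entry : string * string list -> effectful entry

(* Exhaustive on [pure entry]: the [Effect_entry] case is ruled out by
   typing, so no runtime check or error case is needed. *)
let body_of_pure : pure entry -> string = function
  | Pure_entry b -> b

let () = assert (body_of_pure (Pure_entry "tt") = "tt")
```

Passing an `Effect_entry` to `body_of_pure` is a type error rather than a runtime failure, which is the "stronger invariant" such a change buys.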
|
| |
|
|
|
|
|
| |
We use an algebraic type instead of a pair of a boolean and the corresponding
data. For now, this is isomorphic, but it allows later changes to the structure.
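The shape of this kind of change can be sketched as follows; the type and constructor names are illustrative, not the actual ones from the commit. The boolean is folded into the constructor, which is isomorphic to the pair today but lets the two cases carry different data later:

```ocaml
(* Toy sketch: replacing a (bool * data) encoding with an algebraic type. *)
type universes_as_pair = bool * string   (* true means polymorphic *)

type universes_entry =
  | Monomorphic_entry of string
  | Polymorphic_entry of string

(* The two encodings are isomorphic for now. *)
let of_pair ((poly, data) : universes_as_pair) : universes_entry =
  if poly then Polymorphic_entry data else Monomorphic_entry data

let () = assert (of_pair (true, "ctx") = Polymorphic_entry "ctx")
```

Once the cases diverge, e.g. the polymorphic constructor starts carrying extra data, the algebraic version can evolve without touching every consumer of a bare boolean.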
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The function turning a side-effect declaration into the corresponding
entry was crazily wrong, as it used a named universe context quantifying
over De Bruijn universe indices. Declaring such entries resulted
in random anomalies.
This fixes bug #5641.
|
| |
| |
| |
| |
| |
| | |
This function breaks the abstraction barrier of abstract universe contexts,
as it provides a way to observe the bound names of such a context. We remove
all the uses that can easily be got rid of with the current API.
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
It stores both universe constraints and subtyping information for
blocks of inductive declarations.
At this stage there is no inference or checking implemented. The
subtyping information simply encodes equality of levels for the condition of
subtyping.
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
The [let _typ = ...] comes from before universe polymorphism. In those
times it was [let _typ, cst = ...] which produced something of use.
The asserts come from 359e4ffe3 and before (2006 and before). In those
times [Typeops.infer] rebuilt the term being typed, but nowadays the
assert seems of little use.
|
|/ |
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
We remove `edit_id` from the STM. In PIDE they serve a different
purpose; however, in Coq they were of limited utility and required many
special cases all around the code.
Indeed, parsing is not an asynchronous operation in Coq, so having
feedback about parsing didn't make much sense. All clients indeed
ignore such feedback and handle parsing in a synchronous way.
XML protocol clients are unaffected: they rely instead on the
Fail value.
This commit supersedes PR#203.
|
|\ \
| |/
|/| |
|
| | |
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Instead of traversing the term as many times as there are abstracted constants,
we replace the constants in one pass. We have to be a bit careful to replace
the right variables though, in case there are chained abstracts. This is much
faster.
This solves the second part of bug #5382: Huge case analysis fails in coq8.5.x.
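The one-pass idea can be illustrated on a toy term type; the type and the function below are made up for illustration and are not the kernel's. A lookup table maps each abstracted constant to its variable, so a single traversal handles all of them:

```ocaml
(* Toy sketch: replacing n constants in one traversal via a table,
   instead of n traversals of the term (one per constant). *)
type term =
  | Const of string
  | Var of string
  | App of term * term

let replace_all (table : (string * string) list) (t : term) : term =
  let rec go t = match t with
    | Const c ->
      (match List.assoc_opt c table with
       | Some v -> Var v        (* abstracted constant becomes a variable *)
       | None -> t)             (* other constants are left alone *)
    | Var _ -> t
    | App (f, a) -> App (go f, go a)
  in
  go t

let () =
  let t = App (Const "c1", App (Const "c2", Var "x")) in
  assert (replace_all [ ("c1", "v1"); ("c2", "v2") ] t
          = App (Var "v1", App (Var "v2", Var "x")))
```

With chained abstracts the table has to be built with the right variable for each constant, which is the "be a bit careful" part of the commit message.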
|
| |
| |
| |
| |
| |
| | |
We don't need to compute the length of the whole list to find out whether we
can extract a suffix from it; we can do it in one go instead. This slowness
was observable in abstract-heavy code.
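The one-go trick can be sketched with a two-pointer walk; this is illustrative code, not the kernel's. Instead of first computing the length of the whole list, a probe advanced by the suffix's length locates the candidate suffix position in a single traversal:

```ocaml
(* Toy sketch: test whether [suffix] is a suffix of [l] and, if so,
   return the prefix, without computing [List.length l] up front. *)
let extract_suffix (l : 'a list) (suffix : 'a list) : 'a list option =
  (* Advance a probe by (length suffix) elements into [l]. *)
  let rec advance l s = match l, s with
    | l, [] -> Some l
    | [], _ :: _ -> None               (* suffix is longer than l *)
    | _ :: l', _ :: s' -> advance l' s'
  in
  match advance l suffix with
  | None -> None
  | Some probe ->
    (* Walk [l] and [probe] together; when the probe runs out, [l]
       points exactly where the suffix would start. *)
    let rec walk acc l probe = match probe with
      | [] -> if l = suffix then Some (List.rev acc) else None
      | _ :: probe' ->
        (match l with
         | [] -> None                  (* unreachable: probe <= l *)
         | x :: l' -> walk (x :: acc) l' probe')
    in
    walk [] l probe

let () =
  assert (extract_suffix [1; 2; 3; 4] [3; 4] = Some [1; 2]);
  assert (extract_suffix [1; 2] [9; 2] = None)
```

The whole check touches each cell of the list a bounded number of times, whereas computing the length first adds a full extra traversal per call, which adds up in abstract-heavy code.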
|
|/
|
|
|
|
|
|
|
|
|
| |
In one case, the hashconsed type of a judgement was not used anywhere else.
In another case, the Opaqueproof module was re-hashconsing terms that had
already gone through a hashconsing phase. Indeed, most OpaqueDef constructor
applications actually called it beforehand, so the hashconsing performed in
Opaqueproof was most often useless. The only case where this was not true
was at section closing time, so we tweak Cooking.cook_constant to
perform the hashconsing for us.
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
We were doing fishy things in the Term_typing file, where side-effects were
not considered in the right uniquization order because of the uniq_seff_rev
function. It probably did not matter after a9b76df, because effects were
(mostly) uniquized upfront, but this is not clear because of the use of the
transparent API in the module.
Now everything has to go through the opaque API, so that a proper dependence
order is ensured.
|
| |
| |
| |
| |
| | |
We move it from Entries to Term_typing and export the few functions needed
to manipulate it in this module.
|
|\| |
|
| | |
|