We remove most of what was deprecated in `Term`. Now, `intf` and
`kernel` are almost deprecation-free, though I am not very convinced
about the whole `Term -> Constr` renaming; I'm afraid there is no
way back.
Inconsistencies with the constructor policy (see #6440) remain across
the code base, and I'm afraid I don't see a plan to reconcile them.
The `Sorts` deprecation is hard to finalize: opening `Sorts` is not a
good idea, as someone added a `List` module inside it.
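For illustration, a minimal sketch of what the `Term -> Constr` migration looks like from the client side; the exact constructor paths below (`Constr.mkRel`, `Constr.mkApp`) are my assumption of where the usual smart constructors end up, not a quote of the interface:
```ocaml
(* Hedged sketch of the Term -> Constr migration, assuming the usual
   smart constructors now live in Constr. *)

(* Before (deprecated): *)
(* let t = Term.mkApp (Term.mkRel 1, [| Term.mkRel 2 |]) *)

(* After: *)
let t : Constr.constr = Constr.mkApp (Constr.mkRel 1, [| Constr.mkRel 2 |])
```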
We now have only two notions of environment in the kernel: `env` and
`safe_env`.
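As a hedged sketch of how the two relate in client code (the bridging function `Safe_typing.env_of_safe_env` and the `Typeops.infer` signature are my assumptions, not something stated in this commit):
```ocaml
(* Hedged sketch: Environ.env is the kernel typing environment, while
   Safe_typing.safe_environment wraps it behind the safe API. *)
let typecheck (senv : Safe_typing.safe_environment) (c : Constr.constr) =
  let env : Environ.env = Safe_typing.env_of_safe_env senv in
  Typeops.infer env c
```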
We bootstrap the circular evar_map <-> econstr dependency by moving
the internal EConstr.API module to Evd.MiniEConstr. Then we make the
Evd functions use econstr.
We forbid calling `EConstr.to_constr` on terms that are not evar-free,
so as to progress towards enforcing the invariant that `Constr.t` is
evar-free. [cf. #6308]
Due to compatibility constraints, we provide an optional parameter
`abort` to `to_constr`, which can be used to overcome this restriction
until we fix all parts of the code.
Now, grepping for `~abort:false` should return the questionable
parts of the system.
Not a lot of places had to be fixed; some comments:
- problems with the interface due to `Evd/Constr` [`Evd.define` being
the prime example] do seem real!
- inductives also look bad with regard to `Constr/EConstr`.
- code in plugins needs work.
A notable user of this "feature" is `Obligations/Program`, which seems
to like generating kernel-level entries with free evars, then scanning
them and working around the problem by generating constants.
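Roughly, the discipline looks like the sketch below; the `abort` label is taken from this message and may differ in later revisions, so treat it as an assumption:
```ocaml
(* Hedged sketch of the new discipline around EConstr.to_constr. *)
let to_kernel sigma (c : EConstr.t) : Constr.t =
  (* Default: fails if [c] still contains undefined evars. *)
  EConstr.to_constr sigma c

let to_kernel_unsafe sigma (c : EConstr.t) : Constr.t =
  (* Escape hatch for the questionable call sites mentioned above;
     grepping for ~abort:false finds them. *)
  EConstr.to_constr ~abort:false sigma c
```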
This is a quick fix to check, in nested pattern-matching, that the
return clause is typed by an arity. There was already a check when the
return clause was given explicitly (in the 3rd section of
prepare_predicate), and the clause was automatically typed by a sort
when it came from pattern-matching with a type constraint, since the
type constraint is a type; but the check was not done when the
predicate was derived from a former predicate in nested
pattern-matching.
Indeed, in nested pattern-matching, we know that the return predicate
has the form λy1..yn.λx:I(y1,..,yn).T(y1,..,yn,x) and we know that
T(v1,..,vn,u) : Type for the effective u:I(v1,..,vn) we are matching
on, but we don't know whether T(y1,..,yn,x) is itself a sort (e.g. it
can be a "match" which reduces to a sort when instantiated with v1..vn,
but whose evaluation remains blocked when the y1..yn are variables).
Note that in the bug report, the incorrect typing is produced by small
inversion. In practice, the anomaly can also be raised without small
inversion, so we fix it in the general case. See file 5500.v for
details.
The sort of the inductive type was computed before ensuring that the
arguments of the inductive type had the correct types, which could fail
brutally with `NotArity` when one of the types expected to be typed by
an arity was not.
We handle everything up to `Term`, which is the main bulk of the changes.
The use of template polymorphism in constants was quite limited, as it
only applied to definitions that were exactly inductive types without any
parameter whatsoever. Furthermore, it seems that following the introduction
of polymorphic definitions, the code path enforced regular polymorphism as
soon as the type of a definition was given, which was almost always the
case in practice.
Removing this feature had no observable effect on either the test suite or
any development that we monitor on Travis. I believe it is safe to
assume it is useless nowadays.
As per https://github.com/coq/coq/pull/716#issuecomment-305140839
Partially done using
```bash
git grep --name-only 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*[^\.!]")' | xargs sed s'/\(anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*\s*[^\.! ]\)\s*")/\1.")/g' -i
```
and
```bash
git grep --name-only ' !"' | xargs sed s'/ !"/!"/g' -i
```
The rest were manually edited by looking at the results of
```bash
git grep anomaly | grep '\.ml' | grep -v 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp\.\)\?(\(\(Pp.\)\?str\)\?\s*".*\(\.\|!\)")' | grep 'anomaly\($\|[^_]\)' | less
```
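In other words, the convention being enforced is that anomaly messages end with a period (or with an exclamation mark not preceded by a space); a hedged sketch of a conforming call follows, with a made-up label and message:
```ocaml
(* Hedged sketch: an anomaly message ending with a period, as the sed
   rules above enforce.  Label and text are invented for illustration. *)
let fail_unexpected () =
  CErrors.anomaly ~label:"Example.fail_unexpected" (Pp.str "Unexpected case.")
```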
Now it is a private field, and locations are optional.
Also remove obvious comments.
The transition has been done a bit brutally. I think we can still avoid a
lot of useless normalizations here and there by providing the right API
in EConstr. Nonetheless, this is a first step.
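For instance, a sketch of the kind of saving meant here, under the assumption that `EConstr.kind` expands evars lazily against the current `evar_map`:
```ocaml
(* Hedged sketch: inspect a term through EConstr.kind instead of fully
   converting it back to a Constr.t and using Constr.kind. *)
let rec count_lambdas sigma (c : EConstr.t) : int =
  match EConstr.kind sigma c with
  | Constr.Lambda (_, _, body) -> 1 + count_lambdas sigma body
  | _ -> 0
```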
For now we only normalize sorts, and we leave instances for the next
commit.
Incidentally, this fixes a printing bug in output/inference.v where the
displayed name of an evar was the wrong one because its type was not
evar-expanded enough.
This removes quite a few unsafe casts. Unfortunately, I had to reintroduce
the old non-module-based names for these data structures, because I could
not easily reproduce the same hierarchy in EConstr.
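Concretely, the "non-module-based names" are flat type aliases of the following kind; names such as `EConstr.rel_declaration` and `EConstr.rel_context` are my assumption of what was reintroduced, rather than a nested `Context` hierarchy inside `EConstr`:
```ocaml
(* Hedged sketch: using the flat aliases reintroduced in EConstr,
   assuming rel_context is a list of rel_declaration. *)
let push_decl (d : EConstr.rel_declaration) (ctx : EConstr.rel_context)
    : EConstr.rel_context =
  d :: ctx
```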
This makes it possible to factorize code and avoids unnecessary
back-and-forth conversions between the various types of terms.
Note that functions from `Typing` may now raise errors as PretypeError
rather than TypeError, because they call the proper wrapper. I think they
were wrongly calling the kernel because of an oversight related to open
modules.
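The practical consequence for callers is sketched below; the exception names (`Pretype_errors.PretypeError` versus the kernel's `Type_errors.TypeError`) and the exact shape of `Typing.type_of` in this revision are assumptions on my part:
```ocaml
(* Hedged sketch: callers of the typing wrappers should now catch
   PretypeError rather than the kernel's TypeError. *)
let typechecks env sigma c =
  try let _ = Typing.type_of env sigma c in true
  with Pretype_errors.PretypeError _ -> false
```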
Typing.type_of was using conversion for the types of fixpoints, whereas
it could have used unification.
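A hedged sketch of the difference, using the functions I would expect to be involved (`Reductionops.is_conv_leq` for plain conversion, `Evarconv.the_conv_x_leq` for unification); the exact names and signatures in this revision are an assumption:
```ocaml
(* Hedged sketch: conversion cannot instantiate evars, unification can. *)
let check_by_conversion env sigma ~inferred ~expected =
  if Reductionops.is_conv_leq env sigma inferred expected then sigma
  else raise Reduction.NotConvertible

let check_by_unification env sigma ~inferred ~expected =
  (* May solve evars occurring in [expected]. *)
  Evarconv.the_conv_x_leq env inferred expected sigma
```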
module)
For the moment, there is an Error module in compilers-lib/ocamlbytecomp.cm(x)a
This is a reimplementation of Hugo's PR#117.
We are trying to address the problem that the names of some reduction
functions did not say what they were doing (e.g. whd_betadeltaiota was
also doing let-in reduction). Like PR#117, we are careful that no function
changes semantics without also changing its name. Porting existing ML code
should be a matter of renaming a few function calls.
Also, we introduce more precise reduction flags fMATCH, fFIX and fCOFIX,
collectively denominated iota.
We renamed the following functions:
Closure.betadeltaiota -> Closure.all
Closure.betadeltaiotanolet -> Closure.allnolet
Reductionops.beta -> Closure.beta
Reductionops.zeta -> Closure.zeta
Reductionops.betaiota -> Closure.betaiota
Reductionops.betaiotazeta -> Closure.betaiotazeta
Reductionops.delta -> Closure.delta
Reductionops.betalet -> Closure.betazeta
Reductionops.betadelta -> Closure.betadeltazeta
Reductionops.betadeltaiota -> Closure.all
Reductionops.betadeltaiotanolet -> Closure.allnolet
Closure.no_red -> Closure.nored
Reductionops.nored -> Closure.nored
Reductionops.nf_betadeltaiota -> Reductionops.nf_all
Reductionops.whd_betadelta -> Reductionops.whd_betadeltazeta
Reductionops.whd_betadeltaiota -> Reductionops.whd_all
Reductionops.whd_betadeltaiota_nolet -> Reductionops.whd_allnolet
Reductionops.whd_betadelta_stack -> Reductionops.whd_betadeltazeta_stack
Reductionops.whd_betadeltaiota_stack -> Reductionops.whd_all_stack
Reductionops.whd_betadeltaiota_nolet_stack -> Reductionops.whd_allnolet_stack
Reductionops.whd_betadelta_state -> Reductionops.whd_betadeltazeta_state
Reductionops.whd_betadeltaiota_state -> Reductionops.whd_all_state
Reductionops.whd_betadeltaiota_nolet_state -> Reductionops.whd_allnolet_state
Reductionops.whd_eta -> Reductionops.shrink_eta
Tacmach.pf_whd_betadeltaiota -> Tacmach.pf_whd_all
Tacmach.New.pf_whd_betadeltaiota -> Tacmach.New.pf_whd_all
And removed the following ones:
Reductionops.whd_betaetalet
Reductionops.whd_betaetalet_stack
Reductionops.whd_betaetalet_state
Reductionops.whd_betadeltaeta_stack
Reductionops.whd_betadeltaeta_state
Reductionops.whd_betadeltaeta
Reductionops.whd_betadeltaiotaeta_stack
Reductionops.whd_betadeltaiotaeta_state
Reductionops.whd_betadeltaiotaeta
They were unused, and having some reduction functions perform eta is
confusing, since whd_all and nf_all do not do it.
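A hedged sketch of a call site after the renaming, using names taken from the table above; the `env sigma c` argument order is my assumption:
```ocaml
(* Hedged sketch: old names commented out, new names from the table above. *)

(* let c' = Reductionops.whd_betadeltaiota env sigma c *)
let whnf env sigma c = Reductionops.whd_all env sigma c

(* let flags = Closure.betadeltaiotanolet *)
let flags_all_but_letin = Closure.allnolet
```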
This allows a smooth addition of various unsafe flags without wreaking
havoc in the ML codebase.
The rationale is that
1. further typing flags may be available in the future;
2. it makes it easier to trace and document the argument.
Some parts of Evarutil were related to the management of evars under constraints.
We put them in the Evardefine file.