Commit messages

---
As per https://github.com/coq/coq/pull/716#issuecomment-305140839
This was done partially using
```bash
git grep --name-only 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*[^\.!]")' | xargs sed s'/\(anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp.\)\?(\(\(Pp.\)\?str\)\?\s*".*\s*[^\.! ]\)\s*")/\1.")/g' -i
```
and
```bash
git grep --name-only ' !"' | xargs sed s'/ !"/!"/g' -i
```
The rest were manually edited by looking at the results of
```bash
git grep anomaly | grep '\.ml' | grep -v 'anomaly\s*\(~label:"[^"]*"\s*\)\?\(Pp\.\)\?(\(\(Pp.\)\?str\)\?\s*".*\(\.\|!\)")' | grep 'anomaly\($\|[^_]\)' | less
```
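To illustrate the convention these scripts enforce, here is a hypothetical before/after (not taken from the diff; `CErrors.anomaly` takes a `Pp.t` message):
```ocaml
(* Hypothetical example: anomaly messages must end with "." or "!". *)
let before () = CErrors.anomaly (Pp.str "Unexpected constraint")   (* flagged *)
let after () = CErrors.anomaly (Pp.str "Unexpected constraint.")   (* fixed *)
```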
---
This is the continuation of #244: we now deprecate `CErrors.error`;
the single entry point for errors in Coq is `user_err`.
The rationale is to allow easier grepping and to ease a future
cleanup of error messages. In particular, we would like to
systematically classify all error messages raised by Coq and make sure
they are properly documented.
We restore the two functions removed in #244 to improve compatibility,
but mark them deprecated.
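A minimal sketch of what such a deprecated compatibility alias can look like (the forwarding and attribute are illustrative, not the actual Coq source):
```ocaml
(* Illustrative only: keep the old entry point but steer callers to
   CErrors.user_err via a compile-time deprecation warning. *)
let error (msg : string) = CErrors.user_err (Pp.str msg)
[@@ocaml.deprecated "Use CErrors.user_err instead."]
```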
---
The transition has been done a bit brutally; I think we can still avoid a
lot of useless normalizations here and there by providing the right API
in EConstr. Nonetheless, this is a first step.
---
Incidentally, this fixes a printing bug in output/inference.v where the
displayed name of an evar was the wrong one because its type was not
evar-expanded enough.
---
This removes quite a few unsafe casts. Unfortunately, I had to reintroduce
the old non-module-based names for these data structures, because I could
not easily reproduce the same hierarchy in EConstr.
---
We make the .mli files look like they did before the move to EConstr
by opening this module.
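A hypothetical .mli fragment showing the effect: with the module opened, `constr` again denotes the (now EConstr) type, so existing signatures read textually as before:
```ocaml
(* Hypothetical interface fragment: `open EConstr` makes `constr` below
   refer to EConstr.constr, keeping the signature unchanged on the page. *)
open EConstr
val whd_all : Environ.env -> Evd.evar_map -> constr -> constr
```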
---
Mainly concerning references to the "Context.{Rel,Named}.get_{id,value,type}" functions.
Where several modules define a function with the same name, e.g.
Context.{Rel,Named}.get_type,
those calls were qualified with the corresponding module prefix
to make it obvious which function is being called.
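A small sketch of the qualification style (module paths follow the commit text; treat the exact paths as illustrative):
```ocaml
(* Illustrative: with two same-named functions, fully qualified calls
   make it obvious which variant is meant. *)
let rel_type d = Context.Rel.get_type d      (* the Rel variant *)
let named_type d = Context.Named.get_type d  (* the Named variant *)
```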
---
Suggested by @ppedrot
---
As noted by @ppedrot, the first is redundant. The patch is basically a renaming.
We haven't made the component optional yet, but this could happen in a
future patch.
---
For the moment, there is a Closure module in compiler-libs/ocamloptcomp.cm(x)a
---
(This avoids a clash with an OCaml compiler-libs module.)
For the moment, there is an Error module in compiler-libs/ocamlbytecomp.cm(x)a
---
This is a reimplementation of Hugo's PR#117.
We are trying to address the problem that the names of some reduction
functions did not say what they were doing (e.g. whd_betadeltaiota was also
performing let-in reduction). Like PR#117, we are careful that no function
changes semantics without its name changing. Porting existing ML code
should be a matter of renaming a few function calls.
Also, we introduce more precise reduction flags fMATCH, fFIX, fCOFIX,
collectively denominated iota.
We renamed the following functions:
Closure.betadeltaiota -> Closure.all
Closure.betadeltaiotanolet -> Closure.allnolet
Reductionops.beta -> Closure.beta
Reductionops.zeta -> Closure.zeta
Reductionops.betaiota -> Closure.betaiota
Reductionops.betaiotazeta -> Closure.betaiotazeta
Reductionops.delta -> Closure.delta
Reductionops.betalet -> Closure.betazeta
Reductionops.betadelta -> Closure.betadeltazeta
Reductionops.betadeltaiota -> Closure.all
Reductionops.betadeltaiotanolet -> Closure.allnolet
Closure.no_red -> Closure.nored
Reductionops.nored -> Closure.nored
Reductionops.nf_betadeltaiota -> Reductionops.nf_all
Reductionops.whd_betadelta -> Reductionops.whd_betadeltazeta
Reductionops.whd_betadeltaiota -> Reductionops.whd_all
Reductionops.whd_betadeltaiota_nolet -> Reductionops.whd_allnolet
Reductionops.whd_betadelta_stack -> Reductionops.whd_betadeltazeta_stack
Reductionops.whd_betadeltaiota_stack -> Reductionops.whd_all_stack
Reductionops.whd_betadeltaiota_nolet_stack -> Reductionops.whd_allnolet_stack
Reductionops.whd_betadelta_state -> Reductionops.whd_betadeltazeta_state
Reductionops.whd_betadeltaiota_state -> Reductionops.whd_all_state
Reductionops.whd_betadeltaiota_nolet_state -> Reductionops.whd_allnolet_state
Reductionops.whd_eta -> Reductionops.shrink_eta
Tacmach.pf_whd_betadeltaiota -> Tacmach.pf_whd_all
Tacmach.New.pf_whd_betadeltaiota -> Tacmach.New.pf_whd_all
And removed the following ones:
Reductionops.whd_betaetalet
Reductionops.whd_betaetalet_stack
Reductionops.whd_betaetalet_state
Reductionops.whd_betadeltaeta_stack
Reductionops.whd_betadeltaeta_state
Reductionops.whd_betadeltaeta
Reductionops.whd_betadeltaiotaeta_stack
Reductionops.whd_betadeltaiotaeta_state
Reductionops.whd_betadeltaiotaeta
They were unused, and having some reduction functions perform eta is
confusing, as whd_all and nf_all don't do it.
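As a sketch of how the finer-grained flags compose (assuming the RedFlags interface of kernel/closure.mli; illustrative, not part of the commit):
```ocaml
(* Illustrative: iota is now the union of the fMATCH, fFIX and fCOFIX
   reduction kinds; Closure.all would additionally include beta, delta
   and zeta. *)
open Closure
let iota_only = RedFlags.(mkflags [fMATCH; fFIX; fCOFIX])
```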
---
Some functions were left in the old paradigm because they are only used by the
unification algorithms, so they were not worth changing for now.
---
Originally, rel-context was represented as:
Context.rel_context = Names.Name.t * Constr.t option * Constr.t
Now it is represented as:
Context.Rel.t = LocalAssum of Names.Name.t * Constr.t
| LocalDef of Names.Name.t * Constr.t * Constr.t
Originally, named-context was represented as:
Context.named_context = Names.Id.t * Constr.t option * Constr.t
Now it is represented as:
Context.Named.t = LocalAssum of Names.Id.t * Constr.t
| LocalDef of Names.Id.t * Constr.t * Constr.t
Motivation:
(1) In "tactics/hipattern.ml4" file we define "test_strict_disjunction"
function which looked like this:
let test_strict_disjunction n lc =
Array.for_all_i (fun i c ->
match (prod_assum (snd (decompose_prod_n_assum n c))) with
| [_,None,c] -> isRel c && Int.equal (destRel c) (n - i)
| _ -> false) 0 lc
Suppose that you do not know about rel-context and named-context
(which is the case for people who have just started reading the source code).
Merlin would tell you that the type of the value you are destructing
by "match" is:
  'a * 'b option * Constr.t (* worst-case scenario *)
or
  Names.Name.t * Constr.t option * Constr.t (* best-case scenario (?) *)
To me, this is akin to wearing an opaque veil.
It is hard to figure out the meaning of the values you are looking at.
In particular, it is hard to discover the connection between the value
we are destructing above and the datatypes and functions defined
in the "kernel/context.ml" file.
In this case, the connection is there, but it is not visible
(between the function above and the "Context" module).
------------------------------------------------------------------------
Now consider what happens when the reader sees the same function
presented in the following form:
let test_strict_disjunction n lc =
Array.for_all_i (fun i c ->
match (prod_assum (snd (decompose_prod_n_assum n c))) with
| [LocalAssum (_,c)] -> isRel c && Int.equal (destRel c) (n - i)
| _ -> false) 0 lc
If readers haven't seen "LocalAssum" before, they can use Merlin
to jump to the corresponding definition and learn more.
In this case, the connection is there, and it is directly visible
(between the function above and the "Context" module).
(2) Also, if we already have concepts such as:
    - local declaration
    - local assumption
    - local definition
and we describe these notions meticulously in the Reference Manual,
then it is a real pity not to reinforce the connection
between the actual code and the abstract description we published.
---
Note: they do not even seem to have a debugging purpose, so it is better to
remove them before they bitrot.
---
Some functions from pretyping/typing.ml and their derivatives were a potential
source of evar-map leaks, as they dropped their resulting evar map. This commit
clarifies the situation by renaming them according to an unsafe_* scheme. Their
sound variants are likewise renamed to the old names. The following renamings
were made:
- Typing.type_of -> unsafe_type_of
- Typing.e_type_of -> type_of
- A new e_type_of function that matches the e_ prefix policy
- Tacmach.pf_type_of -> pf_unsafe_type_of
- A new safe pf_type_of function.
All uses of unsafe_* functions should eventually be eliminated.
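The resulting interface looks roughly as follows (approximate signatures, given for orientation only):
```ocaml
(* Approximate sketch of pretyping/typing.mli after the renaming. *)
val unsafe_type_of : env -> evar_map -> constr -> types
(* sound variant: returns the updated evar map *)
val type_of : env -> evar_map -> constr -> evar_map * types
(* e_ prefix policy: updates the evar_map ref in place *)
val e_type_of : env -> evar_map ref -> constr -> types
```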
---
"simpl at" and "change at".
---
The main change is that the selection of subterms is made uniform whether
the given term is fully applied or not.
- The selection of subterms now works as follows, depending on whether
  "at" is given, whether the subterm is fully applied, and whether there
  are incompatible subterms matching the pattern. In particular, with
  columns reading "at" given / subterm fully applied / incompatible
  subterms, we have:
  Y Y - : it works like in 8.4
  Y N - : this was broken in 8.4 ("at" was ineffective and it was finding
          all subterms syntactically equal to the first one which matches)
  N Y Y : it now finds all subterms like the first one which matches,
          whereas 8.4 used to fail (I hope this is not too risky a change
          toward a semantics we would regret); e.g. "destruct (S _)" on
          goal "S x = S y + S x" now selects the two occurrences of "S x",
          while it failed before
  N Y N : it works like in 8.4
  N N - : it works like in 8.4, selecting all subterms like the
          first one which matches
- Note that the "historical" semantics (when looking for a subterm, select
  all subterms that syntactically match the first subterm matching the
  pattern, scanning from left to right) is now internally called
  "like first".
- Selection of subterms can now find the type by pattern-matching (useful
  e.g. for "induction (nat_rect _ _ _ _)").
- A version of Unification.w_unify without any conversion is used for
  finding the subterm; it could easily be replaced by another matching
  algorithm.
In particular, "destruct H" now works on a goal such as "H:True -> x<=y |- P y".
The secondary change is in the interpretation of terms with existential
variables:
- When several arguments are given, interpretation is delayed until
  execution time.
- Because we aim at eventually accepting "edestruct c" with unresolved
  holes in c, we need the sigma obtained from c to be an extension of
  the sigma of the tactic, while before, we just type-checked c
  independently of the sigma of the tactic.
- Finishing the resolution of evars (using type classes, candidates,
  pending conversion problems) is made slightly cleaner: it now takes
  three states: a term is evaluated in state sigma, leading to state
  sigma' >= sigma, with evars finally solved in state sigma'' >=
  sigma'; we solve evars in the diff of sigma' and sigma and report
  the solution in sigma'' (see the sketch after this list).
- However, for now we refrain from giving a success semantics to
  "edestruct c" when "c" has unresolved holes, waiting instead for a
  decision on what to do in the similar case of eapply (see mail to
  coqdev).
An auxiliary change is that an "in" clause can be attached to each component
of a "destruct t, u, v", etc.
Incidentally, make_abstraction no longer does evar resolution itself.
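A schematic sketch of the three-state flow described above (names and types are illustrative, not Coq's API):
```ocaml
(* Illustrative: sigma <= sigma' <= sigma''. The term is evaluated in
   sigma, yielding sigma'; we then solve only the evars introduced in
   the sigma -> sigma' diff, yielding sigma''. *)
let interp_then_finish eval solve_new_evars env sigma term =
  let (sigma', v) = eval env sigma term in                   (* sigma' >= sigma *)
  let sigma'' = solve_new_evars env ~initial:sigma sigma' in (* sigma'' >= sigma' *)
  (sigma'', v)
```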
---
an updated evar_map, as patterns work up to universe equalities that
must be kept. Straightforward adaptation of the code depending on this.