| Commit message | Author | Age |
|
|
|
| |
Cf CHANGES for details.
|
|
|
|
| |
Use a much dumber algorithm to recognize the shape of equalities.
|
|
|
|
| |
Take advantage of the fact that the provided term is always a variable in Equality.is_eq_x.
|
| |
|
|
|
|
| |
Do not evar-normalize the argument provided by afterHyp.
|
|
|
|
| |
We do not allocate a closure in the main loop, and do so only when needed.
|
|
|
|
|
| |
We do not check for the presence of a variable in a global definition when we
know that this variable was not present in the section.
|
|
|
|
| |
Do not normalize all goals beforehand.
|
|
|
|
|
| |
Do not evar-normalize the term to substitute with. The engine should be
insensitive to this kind of modification.
|
|
|
|
|
| |
We use simple variable substitution instead of full-power term
matching.
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Now that the plugins are packed, each plugin forms a single
compilation unit, and we only need to install the main cmi file of
this plugin (foo_plugin.cmi).
By the way, better variable names (e.g. OMEGACMO instead of OMEGACMA)
and some other cleanup in Makefile.common (no more INITPLUGINS
variable, for instance).
|
|
|
|
|
|
|
|
|
| |
We do not recompute the shortest-name identifier for global references that
were already traversed. Furthermore, we share the computation of identifiers
between invocations of the name-generating function.
This drastically speeds up detyping for huge goals, further mitigating the
shortcomings of the fix for bug #4777.
|
|
|
|
|
|
| |
Now, only 'phony' targets may be declared just via dependencies.
For 'real-file' targets such as doc/refman/html/index.html, there
must be a concrete production rule.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
As reported by PMP, this was not yet the case. The culprit
was the build of coqdep_boot by a one-liner ocamlopt taking all
the necessary .ml files as arguments (in the right order). This
was nice and short, and correct wrt dependencies, but had the
inconvenience of building some .cmi *after* their corresponding
.cmx, while the rest of the Makefile relies on the reverse order
(see the section about MLWITHOUTMLI). Hence on the next run,
make thought that these .cmx weren't up-to-date.
To solve this issue, we now build coqdep_boot (and other tools)
via a list of .cmx and let our infrastructure build them (after
their .cmi). The only drawback is the 6 extra lines hardcoding
the dependencies of the *.cm(o|i|x) needed for coqdep_boot
(since the .ml.d aren't yet taken into account by make at that
time).
|
| |
|
| |
|
|\ |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Coq locations already had support for this, but contained dummy
information. We no longer need to reconstruct this information by browsing the
file when printing an error message or enriching exceptions on the fly.
It has also become easier to interface with Coq, since locations emitted by
the lexer now always contain full information.
On the API side, Loc.represent has disappeared and Loc.t is now exposed as a
record. This is less error-prone than manipulating a tuple of 5 integers.
Also, Loc.create takes 5 arguments instead of 3 and a pair.
|
| |
| |
| |
| | |
using a custom feedback message in response to "Show Ltac Profile."
|
| | |
|
| |
| |
| |
| |
| | |
We do not check that a hypothesis is used in context declarations that
occur before it.
|
| |
| |
| |
| | |
above it.
|
| | |
|
| | |
|
|\ \ |
|
| | | |
|
| | | |
|
|/ /
| |
| |
| | |
This fixes bug #4828 (https://coq.inria.fr/bugs/show_bug.cgi?id=4828).
|
|\ \ |
|
| | |
| | |
| | |
| | |
| | |
| | |
| | | |
I had to remove code handling the -type-in-type option introduced by commit
9c732a5. We should fix it at some point, but I am not sure that using the
checker with a system known to be blatantly inconsistent makes much sense
anyway.
|
| | | |
|
| | | |
|
| | | |
|
|/ / |
|
| |
| |
| |
| |
| | |
Preserve compatibility whether the Structural Injection flag is on
or not.
|
| |
| |
| |
| | |
Doing it explicitly because it is in another file.
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Comments
--------
- The tactic specialize conveys a somewhat intuitive reasoning concept
and I would support continuing to maintain it, even if the design
comes, in my opinion, with some oddities. (Note that the experience of
MathComp and SSReflect also suggests that specialize is an
interesting concept in itself.)
There are two variants of specialize:
- specialize (H args), with H a hypothesis, looks natural: we
specialize H with extra arguments, and the "as pattern" clause comes
naturally as an extension of it, destructuring the result using the
pattern.
- specialize term with bindings chooses to fully apply the term,
filling missing expressions with bindings, and then to behave like
generalize. Wouldn't we like a more fine-grained approach, with the
result remaining in the context?
In this second case, the "as" clause works as if the term were posed
in the context with "pose proof".
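A minimal sketch of the first variant (the goal and names are mine, not from this commit):

```coq
Goal forall (P : nat -> Prop), (forall n, P n) -> P 0.
Proof.
  intros P H.
  (* "specialize (H args)": H is replaced in the context by its
     instance, here H : P 0. *)
  specialize (H 0).
  exact H.
Qed.
```

The "as pattern" clause would further destructure the specialized hypothesis, e.g. with a conjunction pattern when the instance is a conjunction.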
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
There were three versions of injection:
1. "injection term" without "as" clause:
was leaving hypotheses on the goal in reverse order
2. "injection term as ipat", first version:
was introducing hypotheses using ipat in reverse order, without
checking that the number of ipat was the size of the injection
(activated with "Unset Injection L2R Pattern Order")
3. "injection term as ipat", second version:
was introducing hypotheses using ipat in left-to-right order,
checking that the number of ipat was the size of the injection,
and clearing the injected term by default if it is a hypothesis
(activated with "Set Injection L2R Pattern Order", default one from 8.5)
There is now:
4. "injection term" without "as" clause, new version:
introducing the components of the injection in the context in
left-to-right order using the default intro-pattern "?",
and clearing the injected term by default if it is a hypothesis
(activated with "Set Structural Injection")
The new versions 3. and 4. are the "expected" ones in the sense that
they have the following good properties:
- introduction in the context is in the natural left-to-right order
- "injection" behaves the same with and without "as", always
introducing the hypotheses in the goal, which corresponds to the
natural expectation, as the changes I made in the proof scripts for
adaptation confirm
- the injected term is cleared when it is a hypothesis, which is the
natural expectation, as the changes I made in the proof scripts for
adaptation confirm
Compatibility can be preserved by "Unset Structural Injection" or
by calling "simple injection".
The flag is currently off.
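A minimal example of the clearing behavior (the goal is mine, not from the commit):

```coq
Goal forall a b : nat, S a = S b -> a = b.
Proof.
  intros a b H.
  (* Variant 3: the component a = b is introduced left-to-right
     and the injected hypothesis H is cleared from the context. *)
  injection H as Heq.
  exact Heq.
Qed.
```

With "Set Structural Injection", the bare "injection H" (variant 4) behaves the same way, using "?" as the intro-pattern for each component.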
|
| |
| |
| |
| |
| | |
simplifying and generalizing the grammar entries for injection,
discriminate and simplify_eq.
|
| | |
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
In pat%constr, creating new evars is now allowed only if "eintros" is
given, i.e. "intros" checks that no evars are created, and similarly
e.g. for "injection ... as ... pat%constr".
The form "eintros [...]" or "eintros ->", with the case analysis or
rewrite creating evars, is now also supported.
This is not a commitment to say that it is good to have an e- modifier
on tactics. It is just to be consistent with the existing convention.
It seems to me that the "no e-" variants are good for beginners. However,
experts might prefer to use the e-variants by default. Opinions from
teachers and users would be useful.
To be possibly done: make [= ...] work on hypotheses with side
conditions or parameters, based on the idea that it applies the full
injection and not only its restriction to goals which are
exactly an equality, as it is today.
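For instance (an illustrative use; eq_sym is from the standard library):

```coq
Goal forall n m : nat, n = m -> m = n.
Proof.
  (* eq_sym is applied to the hypothesis being introduced, so H
     ends up with type m = n. Plain "intros" is accepted because
     the application leaves no unresolved evars; "eintros" would
     be required otherwise. *)
  intros n m H%eq_sym.
  exact H.
Qed.
```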
|
|/
|
|
|
| |
It is already very old (shipped with Debian oldstable) and adds file name
support in locations.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This commit documents par:, fixes its semantics so that it
behaves like all:, supports (toplevel) abstract, and optimizes
toplevel solve.
`par: solve [tac]` is equivalent to `Ltac tac1 := solve[tac]...par: tac1`
but is optimized for failures: if one goal fails all are aborted
immediately.
`par: abstract tac` runs abstract on the generated proof terms. Nested
abstract calls are not supported.
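A usage sketch (the lemma is mine; par: requires asynchronous proof workers, e.g. running coqc with -async-proofs on):

```coq
Lemma triple : True /\ True /\ True.
Proof.
  repeat split.
  (* Like "all:", but each remaining goal is dispatched to a
     separate worker; if one goal fails, the others are aborted
     immediately. *)
  par: exact I.
Qed.
```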
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
With par: the scenario is this one:
coqide --- master ---- proof worker 1 (has no par: steps)
---- proof worker 2 (has a par: step)
---- tac worker 2.1
---- tac worker 2.2
---- tac worker 2.3
Proof worker 2 installs a remote counter for universe levels, which
are requested from master. Multiple threads dealing with workers 2.x
may need to get values from that counter at the same time.
Long story short, in this complex scenario a mutex was missing
and the control threads for 2.x were accessing the counter (hence
reading from/writing to the single socket connecting worker 2 with
master at the same time, "corrupting" the data flow).
A better solution would be to have a way to generate unique fresh
universe levels locally in a worker.
|
| |
|