path: root/grammar
* Tentative fix for bug #3957. (Pierre-Marie Pédrot, 2015-01-27)
  Now that ML tactics are not dispatched according to the type of their arguments anymore, one has to take care of the way potentially atomic tactics are handled. This patch ensures that the atomic tactics generated by the TACTIC EXTEND macro have the right length and the right order. There may be very rare trouble if two ML tactics in the same entry are of the form "foo" x1 ... xn "foo" y1 ... ym where all xi and yi may be empty. I doubt that the legacy implementation behaved well in this case anyway.
* Splitting ML tactics into one function per grammar entry. (Pierre-Marie Pédrot, 2015-01-23)
  Furthermore, ML tactic dispatch is no longer done according to the type of its argument.
* Embedding the index of the ML tactic entry in the Tacexpr AST. (Pierre-Marie Pédrot, 2015-01-21)
  This will make it possible to get rid of the fragile mechanism of discriminating which entry to call depending on the dynamic type of its arguments.
* Update headers. (Maxime Dénès, 2015-01-12)
* A global [gfail] tactic which works like [fail] except that it fails even if there is no focused goal. (Arnaud Spiwack, 2014-12-23)
  The 'g' is for "global". The arguments are the same as [fail]. Beware: [let x := constr:… in tac] is a goal-local operation regardless of whether [tac] is goal-local or not.
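  A minimal sketch of the difference (illustrative goal; not part of the commit): once the proof is complete no goal is focused, so [fail] succeeds trivially while [gfail] still fails.

      Goal True.
      Proof.
        exact I.
        (* No focused goal remains here: [fail] would succeed,
           while [gfail] still fails. *)
        Fail gfail.
      Qed.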
* Fix compilation error in some configurations. (Arnaud Spiwack, 2014-12-23)
  This was due to the unqualified uses of "Lazy" being disambiguated in different manners. I just changed the constructor name to "Select". Fixes #3877.
* Add a backtracking version of Ltac's [match]. (Arnaud Spiwack, 2014-12-19)
  [multimatch … with …] returns every possible success: every matching branch and every success of these matching branches, so that subsequent tactics can backtrack as well.
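  A minimal sketch (illustrative goal and names, not from the commit): the continuation [exact h] may fail on the hypothesis chosen by the match; with [multimatch] that failure backtracks into the match, which then tries the other hypothesis.

      Goal True -> False -> False.
      Proof.
        intros A B.
        multimatch goal with
        | [ H : _ |- _ ] => pose (h := H)
        end; exact h.
      Qed.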
* Fixing CAMLP4 compilation. (Pierre-Marie Pédrot, 2014-12-16)
* Extend the syntax of simpl with a delta flag. (Arnaud Spiwack, 2014-12-12)
  You can write 'simpl -[plus minus] div2'. Simpl does not use it for now.
* Enforcing a stronger difference between the two syntaxes "simpl reference" and "simpl pattern" in the code. (Hugo Herbelin, 2014-11-16)
  Maybe we should have merged them instead, but I finally decided to enforce their difference, even if some compatibility is to be preserved. The idea is that at some point "simpl reference" would only call a weak-head simpl (or possibly cbn), leading e.g. to reducing 2+n into S(1+n) rather than S(S(n)), which could be useful for making better use of induction hypotheses.
  In the process we also implement the following:
  - 'simpl "+"' is accepted to reduce all applicative subterms whose head symbol is written "+" (in the toplevel scope); idem for vm_compute and native_compute;
  - 'simpl reference' works even if reference has maximally inserted implicit arguments (this solves the "simpl fst" incompatibility);
  - compatibility of ltac expressions referring to vm_compute and native_compute with functor application should now work (i.e. vm_compute and native_compute are now taken into account in tacsubst.ml);
  - for compatibility, "simpl eq" (assuming no maximal implicit args in eq) or "simpl @eq" to mean "simpl (eq _ _)" are still allowed.
  By the way, is "mul" on nat defined optimally? "3*n" simplifies to "n+(n+(n+0))". Are there some advantages of this compared to having it simplified to "n+n+n" (i.e. to "(n+n)+n")?
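  A minimal sketch of the reference-by-notation form (illustrative goal; at this point "simpl reference" still performs a full simpl at the selected occurrences):

      Goal forall n, 2 + n = S (S n).
      Proof.
        intros n.
        simpl "+".   (* reduces the subterms whose head symbol is written "+" *)
        reflexivity.
      Qed.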
* Removing an unused boolean in the TacMove node of the tacexpr AST. (Pierre-Marie Pédrot, 2014-11-09)
* Continuing 3741c46fe134 on reporting ltac errors. (Hugo Herbelin, 2014-11-08)
  - Do use the flag for_ml for distinguishing Coq-level and ML-level ltac definitions.
  - Skip ML calls from the trace.
  There are still differences between 8.4 and trunk. For instance, on:
    Ltac f x := refine x.
    Goal False.
    f I.
  8.4 says:
    In nested Ltac calls to "f" and "x" (with x:=I), last term evaluation failed.
    Error: The term "I" has type "True" while it is expected to have type "False".
  trunk says:
    In nested Ltac calls to "f" and "refine <genarg:uconstr>", last call failed.
    Error: The term "I" has type "True" while it is expected to have type "False".
  Maybe we would like a mix of both (besides the printing of a non-elegant "<genarg:uconstr>")?
* This commit introduces changes in induction and destruct. (Hugo Herbelin, 2014-10-25)
  The main change is that selection of subterms is made similar whether the given term is fully applied or not.
  - The selection of subterms now works as follows, depending on whether "at" is given, on whether the subterm is fully applied or not, and on whether there are incompatible subterms matching the pattern (see also the sketch after this entry):

      "at" given | subterm fully applied | incompatible subterms | behaviour
          Y      |           Y           |           -           | it works like in 8.4
          Y      |           N           |           -           | this was broken in 8.4 ("at" was ineffective and it was finding all subterms syntactically equal to the first one which matches)
          N      |           Y           |           Y           | it now finds all subterms like the first one which matches, while in 8.4 it used to fail (I hope it is not too risky a change of semantics that we would regret...); e.g. "destruct (S _)" on goal "S x = S y + S x" now selects the two occurrences of "S x" while it was failing before
          N      |           Y           |           N           | it works like in 8.4
          N      |           N           |           -           | it works like in 8.4, selecting all subterms like the first one which matches

  - Note that the "historical" semantics, when looking for a subterm, of selecting all subterms that syntactically match the first subterm to match the pattern (looking from left to right) is now internally called "like first".
  - Selection of subterms can now find the type by pattern-matching (useful e.g. for "induction (nat_rect _ _ _ _)").
  - A version of Unification.w_unify without any conversion is used for finding the subterm: it could easily be replaced by another matching algorithm. In particular, "destruct H" now works on a goal such as "H:True -> x<=y |- P y".
  Secondary change is in the interpretation of terms with existential variables:
  - When several arguments are given, interpretation is delayed to the time of execution.
  - Because we aim at eventually accepting "edestruct c" with unresolved holes in c, we need the sigma obtained from c to be an extension of the sigma of the tactic, while before, we just type-checked c independently of the sigma of the tactic.
  - Finishing the resolution of evars (using type classes, candidates, pending conversion problems) is made slightly cleaner: it now takes three states: a term is evaluated in state sigma, leading to state sigma' >= sigma, with evars finally solved in state sigma'' >= sigma'; we solve evars in the diff of sigma' and sigma and report the solution in sigma''.
  - We however renounce, for now, to give a success semantics to "edestruct c" when "c" has unresolved holes, waiting instead for a decision on what to do in the case of a similar eapply (see mail to coqdev).
  An auxiliary change is that an "in" clause can be attached to each component of a "destruct t, u, v", etc.
  Incidentally, make_abstraction does not do evar resolution itself any longer.
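  A minimal sketch of the example quoted above (the statement is not provable; it only illustrates the new selection behaviour described in the commit):

      Goal forall x y : nat, S x = S y + S x.
      Proof.
        intros x y.
        destruct (S _).   (* selects the two occurrences of [S x] *)
      Abort.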
* Seeing IntroWildcard as an action intro pattern rather than as a naming pattern (the action is "clear"). (Hugo Herbelin, 2014-09-30)
  Added subst_intropattern which was missing since the introduction of ApplyOn intro patterns.
  Still to do: make "intros _ ?id" work without interference when "id" is precisely the internal name used for hypotheses to discard.
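  A minimal sketch of the wildcard acting as a "clear" (illustrative goal, not from the commit):

      Goal True -> False -> False.
      Proof.
        intros _ H.   (* the wildcard discards the first hypothesis *)
        exact H.
      Qed.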
* Add syntax for naming new goals in refine: writing ?[id] instead of _ will name the goal id; writing ?[?id] will use the first fresh name available based on the prefix id. (Hugo Herbelin, 2014-09-30)
  Tactics intro, rename, change, ... from logic.ml now preserve goal names; cut preserves the goal name on its main premise.
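  A minimal sketch of the new syntax (illustrative goal and names, not from the commit):

      Goal nat * nat.
      Proof.
        refine (?[x], ?[y]).   (* the two remaining goals are named x and y *)
      Abort.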
* Parsing evar instances. (Hugo Herbelin, 2014-09-12)
* VernacExtend does not dispatch on type anymore. (Pierre-Marie Pédrot, 2014-09-10)
* Parsing of Type@{max(i,j)}. (Matthieu Sozeau, 2014-09-08)
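  A minimal sketch of the syntax this makes parseable (illustrative names; it assumes the universes are declared first and that the algebraic universe is accepted in this position):

      Universe i. Universe j.
      Definition T := Type@{max(i,j)}.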
* Renaming goal-entering functions. (Pierre-Marie Pédrot, 2014-09-06)
  1. Proofview.Goal.enter into Proofview.Goal.nf_enter.
  2. Proofview.Goal.raw_enter into Proofview.Goal.enter.
  3. Proofview.Goal.goals -> Proofview.Goals.nf_goals
  4. Proofview.Goal.raw_goals -> Proofview.Goals.goals
  5. Ftactic.goals -> Ftactic.nf_goals
  6. Ftactic.raw_goals -> Ftactic.goals
  This is more uniform with the other functions of Coq.
* Moving code of tactic interpretation from Tacenv to Vernacentries. (Pierre-Marie Pédrot, 2014-08-31)
  This makes it possible to register glob tactics directly in the Tacenv API, without having to resort to any internalization function.
* Reorganisation of intropattern code. (Hugo Herbelin, 2014-08-18)
  - emphasizing the different kinds of patterns
  - factorizing code of the non-naming intro-patterns
  Still some questions (the current behaviour is sketched below):
  - Should -> and <- apply to hypotheses or not (currently they apply to hypotheses either when used in assert-style tactics or apply in, or when the term to rewrite is a variable, in which case "subst" is applied)?
  - Should "subst" be used when the -> or <- rewrites an equation x=t posed by "assert" (i.e. rewrite everywhere and clearing x and hyp)?
  - Should -> and <- be applicable in non assert-style if the lemma has quantifications?
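  A minimal sketch of the -> intro pattern when the rewritten side is a variable (illustrative goal, not from the commit; [subst] is applied, so x disappears from the context):

      Goal forall x : nat, x = 0 -> x + 0 = 0.
      Proof.
        intros x ->.
        reflexivity.
      Qed.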
* Moving the TacExtend node from atomic to plain tactics. (Pierre-Marie Pédrot, 2014-08-18)
  Also taking advantage of the change to rename it into TacML. Ultimately this should allow ML tactics to return values.
* Removing simple induction / destruct from the AST. (Pierre-Marie Pédrot, 2014-08-07)
* Instead of relying on a trick to make the constructor tactic parse, put all the tactics using the constructor keyword in one entry. (Pierre-Marie Pédrot, 2014-08-07)
  This has the side effect of also removing the other variant of constructor from the AST. I also needed to hack around the "tauto" tactic to make it work, by calling the ML tactic directly through a TacExtend node. This may be generalized to get rid of the intermingled dependencies between this tactic and the infamous Ltac quotation mechanism.
* Removing the "constructor" tactic from the AST.Gravatar Pierre-Marie Pédrot2014-08-07
|
* Removing "intros untils" from the AST.Gravatar Pierre-Marie Pédrot2014-08-06
|
* Experimentally adding an option for automatically erasing an hypothesis when using it in apply or rewrite (prefix ">", undocumented), and a modifier to explicitly keep it in induction or destruct (prefix "!", reminiscent of non-linearity). (Hugo Herbelin, 2014-08-05)
  Also added the undocumented option "Set Default Clearing Used Hypotheses" which makes apply and rewrite default to erasing the hypothesis they use (if their argument is indeed a hypothesis of the context).
* Adding a syntax "enough" for the variant of "assert" with the order ofGravatar Hugo Herbelin2014-08-05
| | | | subgoals and the role of the "by tac" clause swapped.
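  A minimal sketch (illustrative goal, not from the commit): the [by] clause closes the original goal, and the stated proposition is what remains to be proved.

      Goal 1 + 1 = 2.
      Proof.
        enough (H : True) by reflexivity.
        exact I.
      Qed.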
* STM: encapsulate Pp.message in Feedback.feedback. (Carst Tankink, 2014-08-04)
* Qualified ML tactic names. (Pierre-Marie Pédrot, 2014-07-27)
  The plugin name is used to discriminate potentially conflicting tactic names from different plugins.
* Removing dead code relative to or_metaid. (Pierre-Marie Pédrot, 2014-07-25)
* Distinguish tactics t1;t2 and t1;[t2..]. (Arnaud Spiwack, 2014-07-24)
  They used to be the same (and had a single entry in the AST). But now that t2 can be a multi-goal tactic, t1;[t2..] has the semantics of executing t2 in each goal independently.
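  A minimal sketch of the dispatching form (illustrative goal, not from the commit): [exact I] is run independently in each of the two goals produced by [split].

      Goal True /\ True.
      Proof.
        split; [ exact I .. ].
      Qed.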
* Fix g_coqast for explicit applications. (Matthieu Sozeau, 2014-07-07)
* All coqide-specific files moved into ide/. (Enrico Tassi, 2014-06-25)
  lib/interface split into:
  - lib/feedback: subscribe-based feedback bus (also used by coqidetop)
  - ide/interface: definition of coqide protocol messages
  lib/pp: structured info/err/warn messages
  lib/serialize split into:
  - lib/serialize: generic xml serialization (list, pairs, int, loc, ...) used by coqide but potentially useful to other interfaces
  - ide/xmlprotocol: serialization of protocol messages as in ide/interface
  The only drawback is that coqidetop needs -thread and I had to pass that option to all files in ide/.
* Moving the [split] tactic out of the AST. (Pierre-Marie Pédrot, 2014-06-06)
* Fixing TACTIC EXTEND for argument-free tactics that may modify the whole proof. (Pierre-Marie Pédrot, 2014-05-24)
  Indeed, computing an empty list of arguments triggered a Proofview.Goal.enter, which broke tactics like [shelve_unifiable]. This does not fix this particular tactic though, because the Ltac interpreter still enters the goal when calling an Ltac reference.
* Removing useless use of metaids in the tactic AST. (Pierre-Marie Pédrot, 2014-05-22)
* Moving left & right tactics out of the AST.Gravatar Pierre-Marie Pédrot2014-05-21
|
* Moving (e)transitivity out of the AST. (Pierre-Marie Pédrot, 2014-05-20)
* Tactics declared through TACTIC EXTEND that are of the form "foobar" constr(x1) ... constr(xn) are now defined as pure Ltac definitions, and do not add grammar nor printing rules. (Pierre-Marie Pédrot, 2014-05-20)
  This partially relies on a hack consisting in retrieving the arguments from the tactic environment rather than having them directly passed to the TacExtend node.
* Tentative to add constr-using primitive tactics without grammar rules. (Pierre-Marie Pédrot, 2014-05-20)
  We eta-expand primitive Ltac functions, and instead of feeding TacExtend directly with its arguments, we use the environment to retrieve them. Some tactics from the AST were also moved away and made to use this mechanism.
* Fixing Camlp4 compilation. (Pierre-Marie Pédrot, 2014-05-17)
* Moving argument-free tactics out of the AST into a dedicated "coretactics.ml4" file. (Pierre-Marie Pédrot, 2014-05-16)
* Tactics defined through TACTIC EXTEND that are only defined as a string no longer create grammar and printing rules. (Pierre-Marie Pédrot, 2014-05-16)
  They define Ltac entries in the module that declares them instead.
* Now parsing rules of ML-declared tactics are only made available after the corresponding Declare ML Module command. (Pierre-Marie Pédrot, 2014-05-12)
  This changes essentially two things:
  1. ML plugins are forced to use the DECLARE PLUGIN statement before any TACTIC EXTEND statement. The plugin name must be exactly the string passed to the Declare ML Module command.
  2. ML tactics are only made available after the Coq module that does the corresponding Declare ML Module is imported. This may break a few things, as it already broke quite some uses of omega in the stdlib.
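  A minimal sketch of point 2 (not from the commit), assuming the stdlib's Omega module, which performs the corresponding Declare ML Module: the omega grammar rule only becomes usable after importing it.

      Require Import Omega.
      Goal forall n : nat, n + 0 = n.
      Proof. intros; omega. Qed.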
* Moving the ML tactic extension mechanism to a Libobject-based one. (Pierre-Marie Pédrot, 2014-05-12)
* Plugin names must be declared in the header of .ml4 files, be they static or dynamic. (Pierre-Marie Pédrot, 2014-05-12)
  This is done with the DECLARE PLUGIN "name" macro.
* This commit adds full universe polymorphism and fast projections to Coq. (Matthieu Sozeau, 2014-05-06)
  Add [Polymorphic] and [Monomorphic] local flag for definitions as well as [Set Universe Polymorphism] global flag to make all following definitions polymorphic (see the sketch at the end of this entry). Mainly syntax for now. First part of the big changes to the kernel: - Const, Ind, Construct now come with a universe level instance - It is used for type inference in the kernel, which now also takes a graph as input: actually a set of local universe variables and their constraints. Type inference just checks that the constraints are enough to satisfy its own rules. - Remove polymorphic_arity and _knowing_parameters everywhere: we don't need full applications for polymorphism to apply anymore, as we generate fresh variables at each constant/inductive/constructor application. However knowing_parameters variants might be reinstated later for optimization. - New structures exported in univ.mli: - universe_list for universe level instances - universe_context(_set) for the local universe constraints, also recording which variables will be local and hence generalized after inference if defining a polymorphic ind/constant. - this patch makes coq stop compiling at indtypes.ml Adapt kernel, library, pretyping, tactics and toplevel to universe polymorphism. Various degrees of integration, places where I was not sure what to do or just postponed bigger reorganizations of the code are marked with FIXMEs. Main changes: - Kernel now checks constraints and does not infer them anymore. - The inference functions produce a context of constraints that were checked during inference, useful to do double-checking of the univ. poly. code but might be removed later. - Constant, Inductive entries now have a universe context (local variables and constraints) associated to them. - Printing, debugging functions for the new structures are also implemented.
- Now stopping at Logic.v - Lots of new code in kernel/univ.ml that should be reviewed. - kernel/indtypes probably does not do what's right when inferring inductive type constraints. - Adapted evd to use the new universe context structure. - Did not deal with unification/evar_conv. - Add externalisation code for universe level instances. - Support for polymorphism in pretyping/command and proofs/proofview etc. Needed wrapping of [fresh_.._instance] through the evar_map, which contains the local state of universes during type-checking. - Correct the inductive scheme generation to support polymorphism as well. - Have to review kernel code for correctness, and especially rework the computation of universe constraints for inductives. Stops somewhat later in Logic.v - Fix naming of local/toplevel universes to be correctly done at typechecking time: local variables have no dirpath. - Add code to do substitution of universes in modules, not finished yet. - Move fresh_* functions out of kernel, it won't ever build a universe level again! - Adapt a lot of new_Type to use the correct dirpath and declare the new types in the evar_map so we keep track of them. - A bit of code factorization (evd_comb moved, pretype_global). - Refactor more code - Adapt plugins code (sometimes wrong, marked with FIXME) - Fix cases generating unneeded universe (not sure it's ok though) - Fix scheme generation for good, might have opportunity to cleanup the terms later. Init compiles now (which means rewrite, inversion, elim etc.. work as well). - Unsolved issue of pretyping to lower sorts properly (to Prop for example). This has to do with the (Retyping.get_type_of) giving algebraic universes that would appear on the right of constraints. This makes checking for dangling universes at the end of pretyping fail, hence the check in kernel/univ was removed. It should come back when we have a fix for this. - Correctly (?) compute the levels of inductive types. Removed old code pertaining to universe polymorphism. Note that we generate constraint variables for the conclusion of inductive types invariably. - Shrink constraints before going to the kernel, combine substitution of the smaller universe set with normalization of evars (maybe not done everywhere, only ordinary inductives, definitions and proofs) - More API reworks overall. tclPUSHCONTEXT can be used to add fresh universes to the proof goal (used in a few places to get the right instance. - Quick fix for auto that won't work in the long run. It should always have been restricted to take constant references as input, without any loss of generality over constrs. Fix some plugins and insertion of non-polymorphic constants in a module. Now stops in relation classes. Cleanup and move code from kernel to library and from pretyping to library too. Now there is a unique universe counter declared in library/universes.ml along with all the functions to generate new universes and get fresh constant/inductive terms. - Various function renamings - One important change in kernel/univ.ml: now [sup] can be applied to Prop. - Adapt records/classes to universe polymorphism - Now stops in EqDepFacts due to imprecise universe polymorphism. Forgot to git add those files. interp_constr returns the universe context The context is then pushed through the environment (or proof goal sigma). - Fix insertion of constants/inductives in env, pushing constraints to the global env for non-polymorphic ones. - Add Prop as a universe level to do proper type inference with sorts. 
It is allowed to take [sup] of [Prop] now. - New nf_evar based on new Evd.map(_undefined) - In proofs/logic.ml: conv_leq_goal might create some constraints that are now recorded. - Adapt Program code to universes. Merge with latest trunk + fixes -Use new constr_of_global from universes - fix eqschemes to use polymorphic universes - begin fixing cctac but f_equal still fails - fix [simpl] and rest of tacred - all the eq_constr with mkConst foo should be fixed as well, only partially done - Fix term hashing function to recognize equal terms up to universe instances. - Fix congruence closure to equate terms that differ only in universe instances, these will be resolved by constraints. Add a set of undefined universe variables to unification. Universe variables can now be declared rigid or flexible (unifiable). Flexible variables are resolved at the end of typechecking by instantiating them to their glb, adding upper bound constraints associated to them. Also: - Add polymorphic flag for inductives. - Fix cooking partially - Fix kernel/univ.ml to do normalization of universe expressions at the end of substitution. Correct classes/structures universe inference - Required a bit of extension in Univ to handle Max properly (sup u (u+1)) was returning (max(u,u+1)) for example. - Try a version where substitution of universe expressions for universe levels is allowed at the end of unification. By an invariant this should only instantiate with max() types that are morally "on the right" only. This is controlled using a rigidity attribute of universe variables, also allowing to properly do unification w.r.t. universes during typechecking/inference. - Currently fails in Vectors/Fin.v because case compilation generates "flexible" universes that actually appear in the term... Fix unification of universe variables. - Fix choice of canonical universe in presence of universe constraints, and do so by relying on a trichotomy for universe variables: rigid (won't be substituted), flexible (might be if not substituted by an algebraic) and flexible_alg (always substituted). - Fix romega code and a few more plugins, most of the standard library goes through now. - Had to define some inductives as Polymorphic explicitly to make proofs go through, more to come, and definitions should be polymorphic too, otherwise inconsistencies appear quickly (two uses of the same polymorphic ind through monomorphic functions (like nth on lists of Props and nats) will fix the monomorphic function's universe with eq constraints that are incompatible). - Correct universe polymorphism handling for fixpoint/cofixpoint definitions. - Fix romega to use the right universes for list constructors. - Fix internalization/externalization to deal properly with the implicit parsing of params. - Fix fourier tactic w.r.t. GRefs - Fix substitution saturation of universes. - Fix number syntax plugin. - Fix setoid_ring to take its coefficients in a Set rather than a Type, avoiding a large number of useless universe constraints. - Fix minor checker decl - Fix btauto w.r.t. GRef - Fix proofview to normalize universes in the original types as well. - Fix definitions of projections to not take two universes at the same level, but at different levels instead, avoiding unnecessary constraints that could lower the level of one component depending on the use of the other component. Fix simpl fst, snd to use @fst @snd as they have maximal implicits now. - More simpl snd, fst fixes. - Try to make the nth theory of lists polymorphic. 
Check with Enrico if this change is ok. Case appearing in RingMicromega's call to congruence l417, through a call to refine -> the_conv_x_leq. Compile everything. - "Fix" checker by deactivating code related to polymorphism, should be updated. - Make most of List.v polymorphic to help with following definitions. - When starting a lemma, normalize w.r.t. universes, so that the types get a fixed universe, not refinable later. - In record, don't assign a fully flexible universe variable to the record type if it is a definitional typeclass, as translate_constant doesn't expect an algebraic universe in the type of a constant. It certainly should though. - Fix micromega code. Fix after rebase. Update printing functions to print the polymorphic status of definitions and their universe context. Refine printing of universe contexts - Fix printer for universe constraints - Rework normalization of constraints to separate the Union-Find result from computation of lubs/glbs. Keep universe contexts of inductives/constants in entries for correct substitution inside modules. Abstract interface to get an instantiation of an inductive with its universe substitution in the kernel (no substitution if the inductive is not polymorphic, even if mind_universes is non-empty). Make fst and snd polymorphic, fix instances in RelationPairs to use different universes for the two elements of a pair. - Fix bug in nf_constraints: was removing Set <= constraints, but should remove Prop <= constraints only. - Make proj1_sig, projT1... polymorphic to avoid weird universe unifications, giving rise to universe inconsistenties. Adapt auto hints to polymorphic references. Really produce polymorphic hints... second try - Remove algebraic universes that can't appear in the goal when taking the type of a lemma to start. Proper handling of universe contexts in clenv and auto so that polymorphic hints are really refreshed at each application. Fix erroneous shadowing of sigma variable. - Make apparent the universe context used in pretyping, including information about flexibility of universe variables. - Fix induction to generate a fresh constant instance with flexible universe variables. Add function to do conversion w.r.t. an evar map and its local universes. - Fix define_evar_as_sort to not forget constraints coming from the refinement. - Do not nf_constraints while we don't have the whole term at hand to substitute in. - Move substitution of full universes to Universes - Normalize universes inside an evar_map when doing nf_evar_map_universes. - Normalize universes at each call to interp_ltac (potentially expensive) Do not normalize all evars at each call to interp_gen in tactics: rather incrementally normalize the terms at hand, supposing the normalization of universes will concern only those appearing in it (dangerous but much more efficient). Do not needlessly generate new universes constraints for projections of records. Correct polymorphic discharge of section variables. Fix autorewrite w.r.t. universes: polymorphic rewrite hints get fresh universe instances at each application. Fix r2l rewrite scheme to support universe polymorphism Fix a bug in l2r_forward scheme and fix congruence scheme to handle polymorphism correctly. Second try at fixing autorewrite, cannot do without pushing the constraints and the set of fresh universe variables into the proof context. 
- tclPUSHCONTEXT allow to set the ctx universe variables as flexible or rigid - Fix bug in elimschemes, not taking the right sigma Wrong sigma used in leibniz_rewrite Avoid recomputation of bounds for equal universes in normalization of constraints, only the canonical one need to be computed. Make coercions work with universe polymorphic projections. Fix eronneous bound in universes constraint solving. Make kernel reduction and term comparison strictly aware of universe instances, with variants for relaxed comparison that output constraints. Otherwise some constraints that should appear during pretyping don't and we generate unnecessary constraints/universe variables. Have to adapt a few tactics to this new behavior by making them universe aware. - Fix elimschemes to minimize universe variables - Fix coercions to not forget the universe constraints generated by an application - Change universe substitutions to maps instead of assoc lists. - Fix absurd tactic to handle univs properly - Make length and app polymorphic in List, unification sets their levels otherwise. Move to modules for namespace management instead of long names in universe code. More putting things into modules. Change evar_map structure to support an incremental substitution of universes (populated from Eq constraints), allowing safe and fast inference of precise levels, without computing lubs. - Add many printers and reorganize code - Extend nf_evar to normalize universe variables according to the substitution. - Fix ChoiceFacts.v in Logic, no universe inconsistencies anymore. But Diaconescu still has one (something fixes a universe to Set). - Adapt omega, functional induction to the changes. Fix congruence, eq_constr implem, discharge of polymorphic inductives. Fix merge in auto. The [-parameters-matter] option (formerly relevant_equality). Add -parameters-matter to coqc Do compute the param levels at elaboration time if parameters_matter. - Fix generalize tactic - add ppuniverse_subst - Start fixing normalize_universe_context w.r.t. normalize_univ_variables. - Fix HUGE bug in Ltac interpretation not folding the sigma correctly if interpreting a tactic application to multiple arguments. - Fix bug in union of universe substitution. - rename parameters-matter to indices-matter - Fix computation of levels from indices not parameters. - Fixing parsing so that [Polymorphic] can be applied to gallina extensions. - When elaborating definitions, make the universes from the type rigid when checking the term: they should stay abstracted. - Fix typeclasses eauto's handling of universes for exact hints. Rework all the code for infering the levels of inductives and checking their allowed eliminations sorts. This is based on the computation of a natural level for an inductive type I. The natural level [nat] of [I : args -> sort := c1 : A1 -> I t1 .. cn : An -> I tn] is computed by taking the max of the levels of the args (if indices matter) and the levels of the constructor arguments. The declared level [decl] of I is [sort], which might be Prop, Set or some Type u (u fresh or not). If [decl >= nat && not (decl = Prop && n >= 2)], the level of the inductive is [decl], otherwise, _smashing_ occured. If [decl] is impredicative (Prop or Set when Set is impredicative), we accept the declared level, otherwise it's an error. To compute the allowed elimination sorts, we have the following situations: - No smashing occured: all sorts are allowed. 
(Recall props that are not smashed are Empty/Unitary props) - Some smashing occured: - if [decl] is Type, we allow all eliminations (above or below [decl], not sure why this is justified in general). - if [decl] is Set, we used smashing for impredicativity, so only small sorts are allowed (Prop, Set). - if [decl] is Prop, only logical sorts are allowed: I has either large universes inside it or more than 1 constructor. This does not treat the case where only a Set appeared in I which was previously accepted it seems. All the standard library works with these changes. Still have to cleanup kernel/indtypes.ml. It is a good time to have a whiskey with OJ. Thanks to Peter Lumsdaine for bug reporting: - fix externalisation of universe instances (still appearing when no Printing Universes) - add [convert] and [convert_leq] tactics that keep track of evars and universe constraints. - use them in [exact_check]. Fix odd behavior in inductive type declarations allowing to silently lower a Type i parameter to Set for squashing a naturally Type i inductive to Set. Reinstate the LargeNonPropInductiveNotInType exception. Fix the is_small function not dealing properly with aliases of Prop/Set in Type. Add check_leq in Evd and use it to decide if we're trying to squash an inductive naturally in some Type to Set. - Fix handling of universe polymorphism in typeclasses Class/Instance declarations. - Don't allow lowering a rigid Type universe to Set silently. - Move Ring/Field back to Type. It was silently putting R in Set due to the definition of ring_morph. - Rework inference of universe levels for inductive definitions. - Make fold_left/right polymorphic on both levels A and B (the list's type). They don't have to be at the same level. Handle selective Polymorphic/Monomorphic flag right for records. Remove leftover command Fix after update with latest trunk. Backport patches on HoTT/coq to rebased version of universe polymorphism. - Fix autorewrite wrong handling of universe-polymorphic rewrite rules. Fixes part of issue #7. - Fix the [eq_constr_univs] and add an [leq_constr_univs] to avoid eager equation of universe levels that could just be inequal. Use it during kernel conversion. Fixes issue #6. - Fix a bug in unification that was failing too early if a choice in unification of universes raised an inconsistency. - While normalizing universes, remove Prop in the le part of Max expressions. - Stop rigidifying the universes on the right hand side of a : in definitions. - Now Hints can be declared polymorphic or not. In the first case they must be "refreshed" (undefined universes are renamed) at each application. - Have to refresh the set of universe variables associated to a hint when it can be used multiple times in a single proof to avoid fixing a level... A better & less expensive solution should exist. - Do not include the levels of let-ins as part of records levels. - Fix a NotConvertible uncaught exception to raise a more informative error message. - Better substitution of algebraics in algebraics (for universe variables that can be algebraics). - Fix issue #2, Context was not properly normalizing the universe context. - Fix issue with typeclasses that were not catching UniverseInconsistencies raised by unification, resulting in early failure of proof-search. - Let the result type of definitional classes be an algebraic. - Adapt coercions to universe polymorphic flag (Identity Coercion etc..) 
- Move away a dangerous call in autoinstance that added constraints for every polymorphic definitions once in the environment for no use. Forgot one part of the last patch on coercions. - Adapt auto/eauto to polymorphic hints as well. - Factor out the function to refresh a clenv w.r.t. undefined universes. Use leq_univ_poly in evarconv to avoid fixing universes. Disallow polymorphic hints based on a constr as it is not possible to infer their universe context. Only global references can be made polymorphic. Fixes issue #8. Fix SearchAbout bug (issue #10). Fix program w.r.t. universes: the universe context of a definition changes according to the successive refinements due to typechecking obligations. This requires the Proof modules to return the generated universe substitution when finishing a proof, and this information is passed in the closing hook. The interface is not very clean, will certainly change in the future. - Better treatment of polymorphic hints in auto: terms can be polymorphic now, we refresh their context as well. - Needs a little change in test-pattern that seems breaks multiary uses of destruct in NZDiv.v, l495. FIX to do. Fix [make_pattern_test] to keep the universe information around and still allow tactics to take multiple patterns at once. - Fix printing of universe instances that should not be factorized blindly - Fix handling of the universe context in program definitions by allowing the hook at the end of an interactive proof to give back the refined universe context, before it is transformed in the kernel. - Fix a bug in evarconv where solve_evar_evar was not checking types of instances, resulting in a loss of constraints in unification of universes and a growing number of useless parametric universes. - Move from universe_level_subst to universe_subst everywhere. - Changed representation of universes for a canonical one - Adapt the code so that universe variables might be substituted by arbitrary universes (including algebraics). Not used yet except for polymorphic universe variables instances. - Adapt code to new constraint structure. - Fix setoid rewrite handling of evars that was forgetting the initial universe substitution ! - Fix code that was just testing conversion instead of keeping the resulting universe constraints around in the proof engine. - Make a version of reduction/fconv that deals with the more general set of universe constraints. - [auto using] should use polymorphic versions of the constants. - When starting a proof, don't forget about the algebraic universes in the universe context. Rationalize substitution and normalization functions for universes. Also change back the structure of universes to avoid considering levels n+k as pure levels: they are universe expressions like max. Everything is factored out in the Universes and Univ modules now and the normalization functions can be efficient in the sense that they can cache the normalized universes incrementally. - Adapt normalize_context code to new normalization/substitution functions. - Set more things to be polymorphic, e.g. in Ring or SetoidList for the rest of the code to work properly while the constraint generation code is not adapted. And temporarily extend the universe constraint code in univ to solve max(is) = max(js) by first-order unification (these constraints should actually be implied not enforced). - Fix romega plugin to use the right universes for polymorphic lists. - Fix auto not refreshing the poly hints correctly. 
- Proper postponing of universe constraints during unification, avoid making arbitrary choices. - Fix nf_evars_and* to keep the substitution around for later normalizations. - Do add simplified universe constraints coming from unification during typechecking. - Fix solve_by_tac in obligations to handle universes right, and the corresponding substitution function. Test global universe equality early during simplication of constraints. Better hashconsing, but still not good on universe lists. - Add postponing of "lub" constraints that should not be checked early, they are implied by the others. - Fix constructor tactic to use a fresh constructor instance avoiding fixing universes. - Use [eq_constr_universes] instead of [eq_constr_univs] everywhere, this is the comparison function that doesn't care about the universe instances. - Almost all the library compiles in this new setting, but some more tactics need to be adapted. - Reinstate hconsing. - Keep Prop <= u constraints that can be used to set the level of a universe metavariable. Add better hashconsing and unionfind in normalisation of constraints. Fix a few problems in choose_canonical, normalization and substitution functions. Fix after merge Fixes after rebase with latest Coq trunk, everything compiles again, albeit slowly in some cases. - Fix module substitution and comparison of table keys in conversion using the wrong order (should always be UserOrd now) - Cleanup in universes, removing commented code. - Fix normalization of universe context which was assigning global levels to local ones. Should always be the other way! - Fix universe implementation to implement sorted cons of universes preserving order. Makes Univ.sup correct again, keeping universe in normalized form. - In evarconv.ml, allow again a Fix to appear as head of a weak-head normal form (due to partially applied fixpoints). - Catch anomalies of conversion as errors in reductionops.ml, sad but necessary as eta-expansion might build ill-typed stacks like FProd, [shift;app Rel 1], as it expands not only if the other side is rigid. - Fix module substitution bug in auto.ml - Fix case compilation: impossible cases compilation was generating useless universe levels. Use an IDProp constant instead of the polymorphic identity to not influence the level of the original type when building the case construct for the return type. - Simplify normalization of universe constraints. - Compute constructor levels of records correctly. Fall back to levels for universe instances, avoiding issues of unification. Add more to the test-suite for universe polymorphism. Fix after rebase with trunk Fix substitution of universes inside fields/params of records to be made after all normalization is done and the level of the record has been computed. Proper sharing of lower bounds with fixed universes. Conflicts: library/universes.ml library/universes.mli Constraints were not enforced in compilation of cases Fix after rebase with trunk - Canonical projections up to universes - Fix computation of class/record universe levels to allow squashing to Prop/Set in impredicative set mode. - Fix descend_in_conjunctions to properly instantiate projections with universes - Avoid Context-bound variables taking extra universes in their associated universe context. - Fix evar_define using the wrong direction when refreshing a universe under cumulativity - Do not instantiate a local universe with some lower bound to a global one just because they have the same local glb (they might not have the same one globally). 
- Was loosing some global constraints during normalization (brought again by the kernel), fixed now. - Proper [abstract] with polymorphic lemmas (polymorphic if the current proof is). - Fix silly bug in autorewrite: any hint after the first one was always monomorphic. - Fix fourier after rebase - Refresh universes when checking types of metas in unification (avoid (sup (sup univ))). - Speedup a script in FSetPositive.v Rework definitions in RelationClasses and Morphisms to share universe levels as much as possible. This factorizes many useless x <= RelationClasses.foo constraints in code that uses setoid rewriting. Slight incompatible change in the implicits for Reflexivity and Irreflexivity as well. - Share even more universes in Morphisms using a let. - Use splay_prod instead of splay_prod_assum which doesn't reduce let's to find a relation in setoid_rewrite - Fix [Declare Instance] not properly dealing with let's in typeclass contexts. Fixes in inductiveops, evarutil. Patch by Yves Bertot to allow naming universes in inductive definitions. Fixes in tacinterp not propagating evars correctly. Fix for issue #27: lowering a Type to Prop is allowed during inference (resulting in a Type (* Set *)) but kernel reduction was wrongly refusing the equation [Type (*Set*) = Set]. Fix in interface of canonical structures: an instantiated polymorphic projection is not needed to lookup a structure, just the projection name is enough (reported by C. Cohen). Move from universe inference to universe checking in the kernel. All tactics have to be adapted so that they carry around their generated constraints (living in their sigma), which is mostly straightforward. The more important changes are when refering to Coq constants, the tactics code is adapted so that primitive eq, pairing and sigma types might be polymorphic. Fix another few places in tacinterp and evarconv/evarsolve where the sigma was not folded correctly. - Fix discharge adding spurious global constraints on polymorphic universe variables appearing in assumptions. - Fixes in inductiveops not taking into account universe polymorphic inductives. WIP on checked universe polymorphism, it is clearly incompatible with the previous usage of polymorphic inductives + non-polymorphic definitions on them as universe levels now appear in the inductive type, and add equality constraints between universes that were otherwise just in a cumulativity relation (not sure that was actually correct). Refined version of unification of universe instances for first-order unification, prefering unfolding to arbitrary identification of universes. Moved kernel to universe checking only. Adapt the code to properly infer constraints during typechecking and refinement (tactics) and only check constraints when adding constants/inductives to the environment. Exception made of module subtyping that needs inference of constraints... The kernel conversion (fconv) has two modes: checking only and inference, the later being used by modules only. Evarconv/unification make use of a different strategy for conversion of constants that prefer unfolding to blind unification of rigid universes. Likewise, conversion checking backtracks on different universe instances (modulo the constraints). - adapt congruence/funind/ring plugins to this new mode, forcing them to declare their constraints. - To avoid big performance penalty with reification, make ring/field non-polymorphic (non-linear explosion in run time to be investigated further). 
- pattern and change tactics need special treatment: as they are not _reduction_ but conversion functions, their operation requires to update an evar_map with new universe constraints. - Fix vm_compute to work better with universes. If the normal form is made only of constructors then the readback is correct. However a deeper change will be needed to treat substitution of universe instances when unfolding constants. Remove libtypes.ml Fix after merge. Fix after rebase with trunk. **** Add projections to the kernel, as optimized implementations of constants. - New constructor Proj expects a projection constant applied to its principal inductive argument. - Reduction machines shortcut the expansion to a case and directly project the right argument. - No need to keep parameters as part of the projection's arguments as they are inferable from the type of the principal argument. - ML code now compiles, debugging needed. Start debugging the implementation of projections. Externalisation should keep the information about projections. Internalization, pattern-matching, unification and reduction of projections. Fix some code that used to have _ for parameters that are no longer present in projections. Fixes in unification, reduction, term indexing, auto hints based on projections, add debug printers. Fix byte-compilation of projections, unification, congruence with projections. Adapt .v files using "@proj _ _ record" syntax, should come back on this later. Fix coercion insertion code to properly deal with projection coercions. Fix [simpl proj]... TODO [unfold proj], proj is not considered evaluable. - Fix whnf of projections, now respecting opacity information. - Fix conversion of projections to try first-order first and then incrementally unfold them. - Fix computation of implicit args for projections, simply dropping the information for parameters. - Fix a few scripts that relied on projections carrying their parameters (few at's, rewrites). - Fix unify_with_subterm to properly match under projections. - Fix bug in cooking of projections. - Add pattern PProj for projections. - A very strange bug appeared in BigZ.v, making coqtop segfault on the export of BigN... tofix Fixes after rebase with trunk. Everything compiles now, with efficient projections. Fixes after rebase with trunk (esp. reductionops). Remove warnings, backport patch from old univs+projs branch. Proper expansion of projections during unification. They are considered as maybe flexible keys in evarconv/unification. We try firstorder unification and otherwise expand them as necessary, completely mimicking the original behavior, when they were constants. Fix head_constr_bound interface, the arguments are never needed (they're outside their environment actually). [simpl] and [red]/[intro] should behave just like before now. Fix evarconv that was giving up on proj x = ?e problems too early. - Port patch by Maxime Denes implementing fast projections in the native conversion. - Backport patch to add eta-expansion for records. Do not raise an exception but simply fails if trying to do eta on an inductive that is not a record. Fix projections detyping/matching and unification.ml not always recovering on first-order universe inequalities. Correct eta-expansion for records, and change strategy for conversion with projections to favor reduction over first-order unification a little more. Fix a bug in Ltac pattern matching on projections. 
Fix evars_reset_evd to not recheck existing constraints in case it is just an update (performance improvement for typeclass resolution). - Respect Global/Transparent oracle during unification. Opaque means _never_ unfolded there. - Add empty universes as well as the initial universes (having Prop < Set). - Better display of universe inconsistencies. - Add Beta Ziliani's patch to go fast avoiding imitation when possible. - Allow instantiation by lower bound even if there are universes above - (tentative) In refinement, avoid incremental refinement of terms containing no holes and do it in one step (much faster on big terms). Turned on only if not a checked command. Remove dead code in univ/universes.ml and cleanup setup of hashconsing, for a small speed and memory footprint improvement. - Fix bug in unification using cumulativity when conversion should have been used. - Fix unification of evars having type Type, no longer forcing them to be equal (potentially more constraints): algorithm is now complete w.r.t. cumulativity. - In clenvtac, use refine_nocheck as we are guaranteed to get well-typed terms from unification now, including sufficient universe constraints. Small general speedup. - Fix inference of universe levels of inductive types to avoid smashing inadvertently from Set to Prop. - Fix computation of discharged hypotheses forgetting the arity in inductives. - Fix wrong order in printing of universe inconsistency explanation - Allow coercions between two polymorphic instances of the same inductive/constant. - Do evar normalization and saturation by classes before trying to use program coercion during pretyping. - In unification, force equalities of universes when unifying the same rigid head constants. - Fix omission of projections in constr_leq - Fix [admit] tactic's handling of normalized universes. Fix typing of projections not properly normalizing w.r.t. evars, resulting in anomaly sometimes. Adapt rewrite to work with computational relations (in Type), while maintaining backward compatibility with Propositional rewriting. Introduce a [diff] function on evar maps and universe contexts to properly deal with clause environments. Local hints in auto now store just the extension of the evar map they rely on, so merging them becomes efficient. This fixes an important performance issue in auto and typeclass resolution in presence of a large number of universe constraints. Change FSetPositive and MSetPositive to put their [elt] and [t] universes in Type to avoid restricting global universes to [Set]. This is due to [flip]s polymorphic type being fixed in monomorphic instances of Morphisms.v, and rewriting hence forcing unification of levels that could be left unrelated. - Try a fast_typeops implementation of kernel type inference that allocates less by not rebuilding the term, shows a little performance improvement, and less allocation. - Build universe inconsistency explanations lazily, avoiding huge blowup (x5) in check_constraints/merge_constraints in time and space (these are stressed in universe polymorphic mode). - Hashcons universe instances. Add interface file for fast_typeops Use monomorphic comparisons, little optimizations of hashconsing and comparison in univ.ml. Fix huge slowdown due to building huge error messages. Lazy is not enough to tame this completely. Fix last performance issue, due to abstracts building huge terms abstracting on parts of the section context. Was due to wrong handling of Let... Qed.s in abstract. 
Performance is a tiny bit better than the trunk now. First step at compatibility layer for projections. Compatibility mode for projections. c.(p), p c use primitive projs, while @p refers to an expansion [λ params c, c.(p)]. Recovers almost entire source compatibility with trunk scripts, except when mixing @p and p and doing syntactic matching (they're unifiable though). Add a [Set Primitive Projections] flag to set/unset the use of primitive projections, selectively for each record. Adapt code to handle both the legacy encoding and the primitive projections. Library is almost source-to-source compatible, except for syntactic operations relying on the presence of parameters. In primitive projections mode, @p refers to an expansion [λ params r. p.(r)]. More information in CHANGES (to be reformated/moved to reference manual). Backport changes from HoTT/coq: - Fix anomaly on uncatched NotASort in retyping. - Better recognition of evars that are subject to typeclass resolution. Fixes bug reported by J. Gross on coq-club. - Print universe polymorphism information for parameters as well. Fix interface for unsatisfiable constraints error, now a type error. Try making ring polymorphic again, with a big slowdown, to be investigated. Fix evar/universe leak in setoid rewrite. - Add profiling flag - Move setoid_ring back to non-polymorphic mode to compare perfs with trunk - Change unification to allow using infer_conv more often (big perf culprit), but semantics of backtracking on unification of constants is not properly implemented there. - Fix is_empty/union_evar_universe_context forgetting about some assignments. - Performance is now very close to the trunk from june, with projections deactivated.
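  A minimal sketch of the user-level flags introduced by this work (illustrative definitions, not from the commit; behaviour as in the 8.5 development line):

      Set Universe Polymorphism.
      Definition poly_id {A : Type} (a : A) := a.   (* gets a universe-polymorphic type *)

      Set Primitive Projections.
      Record prod' (A B : Type) := pair' { fst' : A ; snd' : B }.

      Set Printing Universes.   (* display universe instances and constraints *)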
* Adding a canary library. (Pierre-Marie Pédrot, 2014-03-05)
  This canary is imperfect. It allows serialization (hopefully), and forbids generic equality. Still, it allows generic hash.
* Added a new module HMap. (Pierre-Marie Pédrot, 2014-03-05)
  It works (almost) like Map, except that it expects the provided type to come with a hashing function. The internal representation is changed, such that values are first compared w.r.t. their hash. This effectively saves a lot of comparisons which may be far more expensive than O(1), as in the string case, hence resulting in an overall speedup. CAVEAT: not everything is implemented yet, and order-sensitive functions do not respect the provided order anymore.