\chapter[Detailed examples of tactics]{Detailed examples of tactics\label{Tactics-examples}}

This chapter presents detailed examples of certain tactics, to
illustrate their behavior.

\section[\tt dependent induction]{\tt dependent induction\label{dependent-induction-example}}
\def\depind{{\tt dependent induction}~}
\def\depdestr{{\tt dependent destruction}~}

The tactics \depind and \depdestr are another solution for inverting
inductive predicate instances and potentially doing induction at the
same time. They are based on the \texttt{BasicElim} tactic of Conor McBride,
which works by abstracting each argument of an inductive instance by a
variable and constraining it by equalities afterwards. This way, the usual
{\tt induction} and {\tt destruct} tactics can be applied to the
abstracted instance and, after simplification of the equalities, we get
the expected goals.

The abstracting tactic is called {\tt generalize\_eqs} and it takes as
argument a hypothesis to generalize. It uses the {\tt JMeq} datatype
defined in {\tt Coq.Logic.JMeq}, hence we need to require it first.
For example, let us revisit the first example of the inversion documentation above:

\begin{coq_example*}
Require Import Coq.Logic.JMeq.
\end{coq_example*}
\begin{coq_eval}
Require Import Coq.Program.Equality.
\end{coq_eval}

\begin{coq_eval}
Inductive Le : nat -> nat -> Set :=
  | LeO : forall n:nat, Le 0 n
  | LeS : forall n m:nat, Le n m -> Le (S n) (S m).
Variable P : nat -> nat -> Prop.
Variable Q : forall n m:nat, Le n m -> Prop.
\end{coq_eval}

\begin{coq_example*}
Goal forall n m:nat, Le (S n) m -> P n m.
intros n m H.
\end{coq_example*}
\begin{coq_example}
generalize_eqs H.
\end{coq_example}

The index {\tt S n} gets abstracted by a variable here, but a
corresponding equality is added under the abstract instance so that no
information is actually lost. The goal is now almost amenable to induction
or case analysis. One should indeed first move {\tt n} into the goal to
strengthen it before doing induction, otherwise {\tt n} will be fixed in
the induction hypotheses (this does not matter for case analysis).
As a rule of thumb, all the variables that appear inside constructors in
the indices of the hypothesis should be generalized. This is exactly
what the \texttt{generalize\_eqs\_vars} variant does:

\begin{coq_eval} 
Undo 1.
\end{coq_eval}
\begin{coq_example}
generalize_eqs_vars H.
induction H.
\end{coq_example}

As the hypothesis itself did not appear in the goal, we did not need to
use a heterogeneous equality to relate the new hypothesis to the old
one (which just disappeared here). However, the tactic works just as well
when the hypothesis does occur in the goal, e.g.:

\begin{coq_eval}
Admitted.
\end{coq_eval}

\begin{coq_example}
Goal forall n m (p : Le (S n) m), Q (S n) m p.
intros n m p ; generalize_eqs_vars p.
\end{coq_example}

One drawback of this approach is that in the branches one will have to
substitute the equalities back into the instance to get the right
assumptions. Sometimes injection of constructors will also be needed to
recover the needed equalities. Also, some subgoals can be solved directly
because of inconsistent contexts arising from the constraints on
indices. The nice thing is that we can make a tactic based on
{\tt discriminate}, {\tt injection} and variants of substitution to
automatically do such simplifications (which may involve the K axiom).
This is what the {\tt simplify\_dep\_elim} tactic from
{\tt Coq.Program.Equality} does. For example, we might simplify the
previous goals considerably:
% \begin{coq_eval} 
% Abort.
% Goal forall n m:nat, Le (S n) m -> P n m.
% intros n m H ; generalize_eqs_vars H.
% \end{coq_eval}

\begin{coq_example}
induction p ; simplify_dep_elim.
\end{coq_example}

The higher-order tactic {\tt do\_depind} defined in {\tt
  Coq.Program.Equality} takes a tactic and combines the
building blocks we have seen with it: it generalizes by equalities,
calls the given tactic with the generalized induction hypothesis as
argument, and cleans the subgoals with respect to equalities. Its most
important instantiations are \depind and \depdestr, which do induction
or simply case analysis on the generalized hypothesis. For example, we
can redo what we have just done manually with \depdestr:

\begin{coq_eval}
Abort.
\end{coq_eval}
\begin{coq_example*}
Require Import Coq.Program.Equality.
Lemma ex : forall n m:nat, Le (S n) m -> P n m.
intros n m H.
\end{coq_example*}
\begin{coq_example}
dependent destruction H.
\end{coq_example}
\begin{coq_eval}
Abort.
\end{coq_eval}

This gives essentially the same result as {\tt inversion}. Now, if the
destructed hypothesis actually appeared in the goal, the tactic would
still be able to invert it, contrary to {\tt dependent
 inversion}. Consider the following example on vectors:

\begin{coq_example*}
Require Import Coq.Program.Equality.
Set Implicit Arguments.
Variable A : Set.
Inductive vector : nat -> Type := 
| vnil : vector 0 
| vcons : A -> forall n, vector n -> vector (S n).
Goal forall n, forall v : vector (S n), 
  exists v' : vector n, exists a : A, v = vcons a v'.
  intros n v.
\end{coq_example*}
\begin{coq_example}
  dependent destruction v.
\end{coq_example}
\begin{coq_eval}
Abort.
\end{coq_eval}

In this case, the variable {\tt v} can be replaced in the goal by the
generalized hypothesis only when it has a type of the form {\tt vector
 (S n)}, that is, only in the second case of the {\tt destruct}. The
first case is dismissed because {\tt S n <> 0}.

\subsection{A larger example}

Let us see how the technique works with {\tt induction} on inductive
predicates on a real example. We will develop an example application to the
theory of simply-typed lambda-calculus, formalized in a dependently-typed style:

\begin{coq_example*}
Inductive type : Type :=
| base : type
| arrow : type -> type -> type.
Notation " t --> t' " := (arrow t t') (at level 20, t' at next level).
Inductive ctx : Type :=
| empty : ctx
| snoc : ctx -> type -> ctx.
Notation " G , tau " := (snoc G tau) (at level 20, tau at next level).
Fixpoint conc (G D : ctx) : ctx :=
  match D with
    | empty => G
    | snoc D' x => snoc (conc G D') x
  end.
Notation " G ; D " := (conc G D) (at level 20).
Inductive term : ctx -> type -> Type :=
| ax : forall G tau, term (G, tau) tau
| weak : forall G tau, 
  term G tau -> forall tau', term (G, tau') tau
| abs : forall G tau tau', 
  term (G , tau) tau' -> term G (tau --> tau')
| app : forall G tau tau', 
  term G (tau --> tau') -> term G tau -> term G tau'.
\end{coq_example*}

We have defined types and contexts, which are snoc-lists of types. We
also have a {\tt conc} operation that concatenates two contexts.
The {\tt term} datatype in fact represents the possible typing
derivations of the calculus, which are isomorphic to the well-typed
terms, hence the name. A term is an application of one of the following
rules (see the sketch after this list):
\begin{itemize}
\item the axiom rule, to type a reference to the first variable in a context;
\item the weakening rule, to type an object in a larger context;
\item the abstraction or lambda rule, to type a function;
\item the application rule, to type an application of a function to an argument.
\end{itemize}
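
For instance, here is a minimal sketch (written with fully explicit
arguments via {\tt @}, so that it does not depend on implicit-argument
settings) showing the typing derivation of the identity function at type
{\tt base}, i.e.\ the well-typed term {\tt fun x : base => x} in the
empty context:
\begin{verbatim}
(* Sketch: axiom rule in the context (empty, base), then abstraction. *)
Check (@abs empty base base (@ax empty base)
         : term empty (base --> base)).
\end{verbatim}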

Once we have this datatype we want to do proofs on it, like weakening:

\begin{coq_example*}
Lemma weakening : forall G D tau, term (G ; D) tau -> 
  forall tau', term (G , tau' ; D) tau.
\end{coq_example*}
\begin{coq_eval}
  Abort.
\end{coq_eval}

The problem here is that we can't just use {\tt induction} on the typing
derivation because it will forget about the {\tt G ; D} constraint
appearing in the instance. A solution would be to rewrite the goal as:
\begin{coq_example*}
Lemma weakening' : forall G' tau, term G' tau -> 
  forall G D, (G ; D) = G' ->
  forall tau', term (G, tau' ; D) tau.
\end{coq_example*}
\begin{coq_eval}
  Abort.
\end{coq_eval}

With this proper separation of the index from the instance and the right
induction loading (putting {\tt G} and {\tt D} after the inducted-on
hypothesis), the proof will go through, but it is a very tedious
process. One is also forced to make a wrapper lemma to get back the
more natural statement. The \depind tactic alleviates this trouble by
doing all of this plumbing of generalizing and substituting back automatically.
Indeed we can simply write:

\begin{coq_example*}
Require Import Coq.Program.Tactics.
Lemma weakening : forall G D tau, term (G ; D) tau -> 
  forall tau', term (G , tau' ; D) tau.
Proof with simpl in * ; simpl_depind ; auto.
  intros G D tau H. dependent induction H generalizing G D ; intros.
\end{coq_example*}

This call to \depind has an additional argument, {\tt generalizing G D},
which gives a list of variables appearing in the instance that should be
generalized in the goal, so that they can vary in the induction
hypotheses. By default, all variables appearing inside constructors
(except in a parameter position) of the instantiated hypothesis are
generalized automatically, but one can always give the list explicitly.

\begin{coq_example}
  Show.
\end{coq_example}

The {\tt simpl\_depind} tactic includes an automatic tactic that tries
to simplify equalities appearing at the beginning of induction
hypotheses, generally using trivial applications of reflexivity. In
cases where the equality is not between constructor forms, though, one
must help the automation by giving some arguments, using the
{\tt specialize} tactic for example.

\begin{coq_example*}
destruct D... apply weak ; apply ax. apply ax.
destruct D...
\end{coq_example*}
\begin{coq_example}
Show.
\end{coq_example}
\begin{coq_example}
  specialize (IHterm G0 empty eq_refl).
\end{coq_example}

Once the induction hypothesis has been narrowed to the right equality,
it can be used directly. 

\begin{coq_example}
  apply weak, IHterm.
\end{coq_example}

If there is an easy first-order solution to these equations, as in this
subgoal, the {\tt specialize\_eqs} tactic can be used instead of giving
explicit proof terms:

\begin{coq_example}
  specialize_eqs IHterm.
\end{coq_example}
This concludes our example.
\SeeAlso The {\tt induction} (Section~\ref{elim}), {\tt case} (Section~\ref{case}) and {\tt inversion} (Section~\ref{inversion}) tactics.

\section[\tt autorewrite]{\tt autorewrite\label{autorewrite-example}}

Here are two examples of {\tt autorewrite} use. The first one ({\em Ackermann
function}) shows a quite basic use where there is no conditional
rewriting. The second one ({\em McCarthy function}) involves conditional
rewriting and shows how to deal with it using the optional tactic of the
{\tt Hint~Rewrite} command.

\firstexample
\example{Ackermann function}
%Here is a basic use of {\tt AutoRewrite} with the Ackermann function:

\begin{coq_example*}
Reset Initial.
Require Import Arith.
Variable Ack : 
           nat -> nat -> nat.
Axiom Ack0 : 
        forall m:nat, Ack 0 m = S m.
Axiom Ack1 : forall n:nat, Ack (S n) 0 = Ack n 1.
Axiom Ack2 : forall n m:nat, Ack (S n) (S m) = Ack n (Ack (S n) m).
\end{coq_example*}

\begin{coq_example}
Hint Rewrite Ack0 Ack1 Ack2 : base0.
Lemma ResAck0 : 
 Ack 3 2 = 29.
autorewrite with base0 using try reflexivity.
\end{coq_example}

\begin{coq_eval}
Reset Initial.
\end{coq_eval}

\example{McCarthy function}
%The Mac Carthy function shows a more complex case:

\begin{coq_example*}
Require Import Omega.
Variable g :   
           nat -> nat -> nat.
Axiom g0 : 
        forall m:nat, g 0 m = m.
Axiom
  g1 :
    forall n m:nat,
      (n > 0) -> (m > 100) -> g n m = g (pred n) (m - 10).
Axiom
  g2 :
    forall n m:nat,
      (n > 0) -> (m <= 100) -> g n m = g (S n) (m + 11).
\end{coq_example*}

\begin{coq_example}
Hint Rewrite g0 g1 g2 using omega : base1.
Lemma Resg0 : 
 g 1 110 = 100.
autorewrite with base1 using reflexivity || simpl.
\end{coq_example}

\begin{coq_eval}
Abort.
\end{coq_eval}

\begin{coq_example}
Lemma Resg1 : g 1 95 = 91.
autorewrite with base1 using reflexivity || simpl.
\end{coq_example}

\begin{coq_eval}
Reset Initial.
\end{coq_eval}

\section[\tt quote]{\tt quote\tacindex{quote}
\label{quote-examples}}

The tactic \texttt{quote} allows using Barendregt's so-called
2-level approach without writing any ML code. Suppose you have a
language \texttt{L} of ``abstract terms'', a type \texttt{A} of
``concrete terms'', and a function \texttt{f : L -> A}. If \texttt{L} is
a simple inductive datatype and \texttt{f} a simple fixpoint,
\texttt{quote f} will replace the head of the current goal by a
convertible term of the form \texttt{(f t)}. \texttt{L} must have a
constructor of type \texttt{A -> L}.

Here is an example:

\begin{coq_example}
Require Import Quote.
Parameters A B C : Prop.
Inductive formula : Type :=
  | f_and : formula -> formula -> formula (* binary constructor *)
  | f_or : formula -> formula -> formula
  | f_not : formula -> formula (* unary constructor *)
  | f_true : formula (* 0-ary constructor *)
  | f_const : Prop -> formula (* constructor for constants *).
Fixpoint interp_f (f:
                   formula) : Prop :=
  match f with
  | f_and f1 f2 => interp_f f1 /\ interp_f f2
  | f_or f1 f2 => interp_f f1 \/ interp_f f2
  | f_not f1 => ~ interp_f f1
  | f_true => True
  | f_const c => c
  end.
Goal A /\ (A \/ True) /\ ~ B /\ (A <-> A).
quote interp_f.
\end{coq_example}

The algorithm to perform this inversion is: try to match the term with
the right-hand side expressions of \texttt{f}. If there is a match,
apply the corresponding left-hand side and call yourself recursively on
the sub-terms. If there is no match, we are at a leaf: return the
corresponding constructor (here \texttt{f\_const}) applied to the term.
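
For instance, on the goal above this inversion builds roughly the
following abstract term; note that {\tt A <-> A} does not match any
right-hand side of {\tt interp\_f}, so it ends up wrapped in
{\tt f\_const}:
\begin{verbatim}
(* Sketch of the term built by quote for the goal above: *)
interp_f
  (f_and (f_const A)
    (f_and (f_or (f_const A) f_true)
      (f_and (f_not (f_const B)) (f_const (A <-> A)))))
\end{verbatim}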

\begin{ErrMsgs}
\item \errindex{quote: not a simple fixpoint} \\
  Happens when \texttt{quote} is not able to perform inversion properly.
\end{ErrMsgs}

\subsection{Introducing a variable map}

The normal use of \texttt{quote} is to make proofs by reflection: one
defines a function \texttt{simplify : formula -> formula} and proves a
theorem \texttt{simplify\_ok : forall f:formula, interp\_f (simplify f)
-> interp\_f f}. Then, one can simplify formulas by doing:
\begin{verbatim}
   quote interp_f.
   apply simplify_ok.
   compute.
\end{verbatim}
But there is a problem with leaves: in the example above one cannot
write a function that implements, for example, the logical simplifications
$A \land A \ra A$ or $A \land \lnot A \ra \texttt{False}$. This is
because \Prop{} is impredicative.

It is better to use the following type of formulas:

\begin{coq_eval}
Reset formula.
\end{coq_eval}
\begin{coq_example}
Inductive formula : Set :=
  | f_and : formula -> formula -> formula
  | f_or : formula -> formula -> formula
  | f_not : formula -> formula
  | f_true : formula
  | f_atom : index -> formula.
\end{coq_example}

\texttt{index} is defined in module \texttt{Quote}. Equality on that
type is decidable, so we are able to simplify $A \land A$ into $A$ at
the abstract level.
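
For instance, here is a minimal sketch of such an abstract-level
simplification (assuming the boolean equality \texttt{index\_eq}
provided by the \texttt{Quote} module); it merges syntactically equal
atoms under a conjunction, something that was impossible when constants
were arbitrary propositions:
\begin{verbatim}
(* Sketch: simplify  f_and (f_atom i) (f_atom i)  into  f_atom i,
   using the decidable equality on index. *)
Fixpoint simplify (f:formula) : formula :=
  match f with
  | f_and f1 f2 =>
      let f1' := simplify f1 in
      let f2' := simplify f2 in
      match f1', f2' with
      | f_atom i, f_atom j =>
          if index_eq i j then f_atom i else f_and f1' f2'
      | _, _ => f_and f1' f2'
      end
  | f_or f1 f2 => f_or (simplify f1) (simplify f2)
  | f_not f1 => f_not (simplify f1)
  | f_true => f_true
  | f_atom i => f_atom i
  end.
\end{verbatim}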

When there are variables, there are bindings, and \texttt{quote}
also provides a type \texttt{(varmap A)} of bindings from
\texttt{index} to any set \texttt{A}, and a function
\texttt{varmap\_find} to search in such maps. The interpretation
function now has another argument, a variable map:

\begin{coq_example}
Fixpoint interp_f (vm:
                    varmap Prop) (f:formula) {struct f} : Prop :=
  match f with
  | f_and f1 f2 => interp_f vm f1 /\ interp_f vm f2
  | f_or f1 f2 => interp_f vm f1 \/ interp_f vm f2
  | f_not f1 => ~ interp_f vm f1
  | f_true => True
  | f_atom i => varmap_find True i vm
  end.
\end{coq_example}

\noindent\texttt{quote} handles this second case properly:

\begin{coq_example}
Goal A /\ (B \/ A) /\ (A \/ ~ B).
quote interp_f.
\end{coq_example}

It builds \texttt{vm} and \texttt{t} such that \texttt{(f vm t)} is
convertible with the conclusion of the current goal.

\subsection{Combining variables and constants}

One can have both variables and constants in abstract terms; that is
the case, for example, for the \texttt{ring} tactic (Chapter~\ref{ring}).
Then one must provide \texttt{quote} with a list of
\emph{constructors of constants}. For example, if the list is
\texttt{[O S]}, then closed natural numbers will be considered as
constants and other terms as variables.

Example: 

\begin{coq_eval}
Reset formula.
\end{coq_eval}
\begin{coq_example*}
Inductive formula : Type :=
  | f_and : formula -> formula -> formula
  | f_or : formula -> formula -> formula
  | f_not : formula -> formula
  | f_true : formula
  | f_const : Prop -> formula (* constructor for constants *)
  | f_atom : index -> formula.
Fixpoint interp_f
 (vm:            (* constructor for variables *)
  varmap Prop) (f:formula) {struct f} : Prop :=
  match f with
  | f_and f1 f2 => interp_f vm f1 /\ interp_f vm f2
  | f_or f1 f2 => interp_f vm f1 \/ interp_f vm f2
  | f_not f1 => ~ interp_f vm f1
  | f_true => True
  | f_const c => c
  | f_atom i => varmap_find True i vm
  end.
Goal 
A /\ (A \/ True) /\ ~ B /\ (C <-> C).
\end{coq_example*}

\begin{coq_example}
quote interp_f [ A B ].
Undo.
  quote interp_f [ B C iff ].
\end{coq_example}

\Warning Since function inversion
is undecidable in the general case, don't expect miracles from it!

\begin{Variants}

\item {\tt quote {\ident} in {\term} using {\tac}}

  \tac\ must be a functional tactic (starting with {\tt fun x =>})
  and will be called with the quoted version of \term\ according to
  \ident.

\item {\tt quote {\ident} [ \ident$_1$ \dots\ \ident$_n$ ] in {\term} using {\tac}}

  Same as above, but uses \ident$_1$, \dots, \ident$_n$ to
  choose which subterms are constants (see above).

\end{Variants}

% \SeeAlso file \texttt{theories/DEMOS/DemoQuote.v}

\SeeAlso comments of source file \texttt{plugins/quote/quote.ml}

\SeeAlso the \texttt{ring} tactic (Chapter~\ref{ring})



\section{Using the tactical language}

\subsection{About the cardinality of the set of natural numbers}

A first example, which shows how to use pattern matching over proof
contexts, is the proof that the natural numbers have more than two
elements. The proof of such a lemma can be done as follows:
%\begin{figure}
%\begin{centerframe}
\begin{coq_eval}
Reset Initial.
Require Import Arith.
Require Import List.
\end{coq_eval}
\begin{coq_example*}
Lemma card_nat :
 ~ (exists x : nat, exists y : nat, forall z:nat, x = z \/ y = z).
Proof.
red; intros (x, (y, Hy)).
elim (Hy 0); elim (Hy 1); elim (Hy 2); intros;
 match goal with
 | [_:(?a = ?b),_:(?a = ?c) |- _ ] =>
     cut (b = c); [ discriminate | transitivity a; auto ]
 end.
Qed.
\end{coq_example*}
%\end{centerframe}
%\caption{A proof on cardinality of natural numbers}
%\label{cnatltac}
%\end{figure}

Notice that all the (very similar) cases coming from the three
eliminations (with three distinct natural numbers) are successfully solved by
a {\tt match goal} structure and, in particular, with only one pattern (using
non-linear matching).
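
As a minimal sketch (with a hypothetical tactic name), the essence of
this non-linear pattern can be isolated as follows: the metavariable
{\tt ?a} occurs in two hypothesis patterns, so the branch only applies
when two equations share the same left-hand side.
\begin{verbatim}
Ltac find_shared_lhs :=
  match goal with
  | [ _ : (?a = ?b), _ : (?a = ?c) |- _ ] =>
      (* both hypotheses start from the same term a *)
      idtac "two equations with the same left-hand side on" a
  end.
\end{verbatim}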

\subsection{Permutation on closed lists}

Another, more complex, example is the problem of permutation on closed lists.
The aim is to show that a closed list is a permutation of another one.

First, we define the permutation predicate as shown on Figure~\ref{permutpred}.

\begin{figure}
\begin{centerframe}
\begin{coq_example*}
Section Sort.
Variable A : Set.
Inductive permut : list A -> list A -> Prop :=
  | permut_refl   : forall l, permut l l
  | permut_cons   :
      forall a l0 l1, permut l0 l1 -> permut (a :: l0) (a :: l1)
  | permut_append : forall a l, permut (a :: l) (l ++ a :: nil)
  | permut_trans  :
      forall l0 l1 l2, permut l0 l1 -> permut l1 l2 -> permut l0 l2.
End Sort.
\end{coq_example*}
\end{centerframe}
\caption{Definition of the permutation predicate}
\label{permutpred}
\end{figure}


\begin{figure}
\begin{centerframe}
\begin{coq_example}
Ltac Permut n :=
  match goal with
  | |- (permut _ ?l ?l) => apply permut_refl
  | |- (permut _ (?a :: ?l1) (?a :: ?l2)) =>
      let newn := eval compute in (length l1) in
      (apply permut_cons; Permut newn)
  | |- (permut ?A (?a :: ?l1) ?l2) =>
      match eval compute in n with
      | 1 => fail
      | _ =>
          let l1' := constr:(l1 ++ a :: nil) in
          (apply (permut_trans A (a :: l1) l1' l2);
            [ apply permut_append | compute; Permut (pred n) ])
      end
  end.
Ltac PermutProve :=
  match goal with
  | |- (permut _ ?l1 ?l2) =>
      match eval compute in (length l1 = length l2) with
      | (?n = ?n) => Permut n
      end
  end.
\end{coq_example}
\end{centerframe}
\caption{Permutation tactic}
\label{permutltac}
\end{figure}

Next, we can naturally write the tactic; the result can be seen on
Figure~\ref{permutltac}. Notice that we use two toplevel
definitions, {\tt PermutProve} and {\tt Permut}. The function to be
called is {\tt PermutProve}, which computes the lengths of the two
lists and calls {\tt Permut} with that length if the two lists have the
same length. {\tt Permut} works as expected.  If the two lists are
equal, it concludes. Otherwise, if the lists have identical first
elements, it applies {\tt Permut} on the tails of the lists.  Finally,
if the lists have different first elements, it puts the first element
of one of the lists (here the second one which appears in the {\tt
  permut} predicate) at the end if that is possible, i.e., if the new
first element has been at this place previously. To verify that all
rotations have been done for a list, we use the length of the list as
an argument for {\tt Permut}, and this length is decremented for each
rotation down to, but not including, 1 because for a list of length
$n$, we can make exactly $n-1$ rotations to generate at most $n$
distinct lists. Here, it must be noticed that we use the natural
numbers of {\Coq} for the rotation counter. On Figure~\ref{ltac}, we
can see that it is possible to use usual natural numbers, but they are
only used as arguments for primitive tactics and they cannot be
handled; in particular, we cannot make computations with them. So, a
natural choice is to use {\Coq} data structures so that {\Coq} makes
the computations (reductions) by {\tt eval compute in} and we can get
the terms back by {\tt match}.
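
For instance, here is a minimal sketch (with a hypothetical tactic name)
isolating this idiom: the lengths of two concrete lists are computed by
{\Coq} through {\tt eval compute in}, and the resulting term is then
inspected by a {\tt match}:
\begin{verbatim}
Ltac SameLength l1 l2 :=
  match eval compute in (length l1 = length l2) with
  | (?n = ?n) => idtac "both lists have length" n
  | _ => fail "the lists have different lengths"
  end.
\end{verbatim}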
 
With {\tt PermutProve}, we can now prove lemmas such as the following:
%\begin{figure}
%\begin{centerframe}

\begin{coq_example*}
Lemma permut_ex1 :
  permut nat (1 :: 2 :: 3 :: nil) (3 :: 2 :: 1 :: nil).
Proof. PermutProve. Qed.
Lemma permut_ex2 :
  permut nat
    (0 :: 1 :: 2 :: 3 :: 4 :: 5 :: 6 :: 7 :: 8 :: 9 :: nil)
    (0 :: 2 :: 4 :: 6 :: 8 :: 9 :: 7 :: 5 :: 3 :: 1 :: nil).
Proof. PermutProve. Qed.
\end{coq_example*}
%\end{centerframe}
%\caption{Examples of {\tt PermutProve} use}
%\label{permutlem}
%\end{figure}


\subsection{Deciding intuitionistic propositional logic}

\begin{figure}[b]
\begin{centerframe}
\begin{coq_example}
Ltac Axioms :=
  match goal with
  | |- True => trivial
  | _:False |- _  => elimtype False; assumption
  | _:?A |- ?A  => auto
  end.
\end{coq_example}
\end{centerframe}
\caption{Deciding intuitionistic propositions (1)}
\label{tautoltaca}
\end{figure}


\begin{figure}
\begin{centerframe}
\begin{coq_example}
Ltac DSimplif :=
  repeat
   (intros;
    match goal with
     | id:(~ _) |- _ => red in id
     | id:(_ /\ _) |- _ =>
         elim id; do 2 intro; clear id
     | id:(_ \/ _) |- _ =>
         elim id; intro; clear id
     | id:(?A /\ ?B -> ?C) |- _ =>
         cut (A -> B -> C);
          [ intro | intros; apply id; split; assumption ]
     | id:(?A \/ ?B -> ?C) |- _ =>
         cut (B -> C);
          [ cut (A -> C);
             [ intros; clear id
             | intro; apply id; left; assumption ]
          | intro; apply id; right; assumption ]
     | id0:(?A -> ?B),id1:?A |- _ =>
         cut B; [ intro; clear id0 | apply id0; assumption ]
     | |- (_ /\ _) => split
     | |- (~ _) => red
     end).
Ltac TautoProp :=
  DSimplif;
   Axioms ||
     match goal with
     | id:((?A -> ?B) -> ?C) |- _ =>
          cut (B -> C);
          [ intro; cut (A -> B);
             [ intro; cut C;
                [ intro; clear id | apply id; assumption ]
             | clear id ]
          | intro; apply id; intro; assumption ]; TautoProp
     | id:(~ ?A -> ?B) |- _ =>
         cut (False -> B);
          [ intro; cut (A -> False);
             [ intro; cut B;
                [ intro; clear id | apply id; assumption ]
             | clear id ]
          | intro; apply id; red; intro; assumption ]; TautoProp
     | |- (_ \/ _) => (left; TautoProp) || (right; TautoProp)
     end.
\end{coq_example}
\end{centerframe}
\caption{Deciding intuitionistic propositions (2)}
\label{tautoltacb}
\end{figure}

Pattern matching on goals allows complete, and thus powerful,
backtracking when returning tactic values. An interesting application
is the problem of deciding intuitionistic propositional logic.
Considering the contraction-free sequent calculi {\tt LJT*} of
Roy~Dyckhoff (\cite{Dyc92}), it is quite natural to code such a tactic
using the tactic language, as shown on Figures~\ref{tautoltaca}
and~\ref{tautoltacb}. The tactic {\tt Axioms} tries to conclude using
the usual axioms. The tactic {\tt DSimplif} applies all the reversible
rules of Dyckhoff's system. Finally, the tactic {\tt TautoProp} (the
main tactic to be called) simplifies with {\tt DSimplif}, tries to
conclude with {\tt Axioms}, and tries several paths using the
backtracking rules (one of Dyckhoff's four rules for the left
implication, which get rid of contraction, and the right-or rule).

For example, with {\tt TautoProp}, we can prove tautologies like the
following:
% on Figure~\ref{tautolem}.
%\begin{figure}[tbp]
%\begin{centerframe}
\begin{coq_example*}
Lemma tauto_ex1 : forall A B:Prop, A /\ B -> A \/ B.
Proof. TautoProp. Qed.
Lemma tauto_ex2 :
   forall A B:Prop, (~ ~ B -> B) -> (A -> B) -> ~ ~ A -> B.
Proof. TautoProp. Qed.
\end{coq_example*}
%\end{centerframe}
%\caption{Proofs of tautologies with {\tt TautoProp}}
%\label{tautolem}
%\end{figure}

\subsection{Deciding type isomorphisms}

A trickier problem is to decide equalities between types modulo
isomorphisms. Here, we choose to use the isomorphisms of the simply typed
$\lb{}$-calculus with Cartesian product and $unit$ type (see, for example,
\cite{RC95}). The axioms of this $\lb{}$-calculus are given on
Figure~\ref{isosax}.

\begin{figure}
\begin{centerframe}
\begin{coq_eval}
Reset Initial.
\end{coq_eval}
\begin{coq_example*}
Open Scope type_scope.
Section Iso_axioms.
Variables A B C : Set.
Axiom Com : A * B = B * A.
Axiom Ass : A * (B * C) = A * B * C.
Axiom Cur : (A * B -> C) = (A -> B -> C).
Axiom Dis : (A -> B * C) = (A -> B) * (A -> C).
Axiom P_unit : A * unit = A.
Axiom AR_unit : (A -> unit) = unit.
Axiom AL_unit : (unit -> A) = A.
Lemma Cons : B = C -> A * B = A * C.
Proof.
intro Heq; rewrite Heq; reflexivity.
Qed.
End Iso_axioms.
\end{coq_example*}
\end{centerframe}
\caption{Type isomorphism axioms}
\label{isosax}
\end{figure}


\begin{figure}[ht]
\begin{centerframe}
\begin{coq_example}
Ltac DSimplif trm :=
  match trm with
  | (?A * ?B * ?C) =>
      rewrite <- (Ass A B C); try MainSimplif
  | (?A * ?B -> ?C) =>
      rewrite (Cur A B C); try MainSimplif
  | (?A -> ?B * ?C) =>
      rewrite (Dis A B C); try MainSimplif
  | (?A * unit) =>
      rewrite (P_unit A); try MainSimplif
  | (unit * ?B) =>
      rewrite (Com unit B); try MainSimplif
  | (?A -> unit) =>
      rewrite (AR_unit A); try MainSimplif
  | (unit -> ?B) =>
      rewrite (AL_unit B); try MainSimplif
  | (?A * ?B) =>
      (DSimplif A; try MainSimplif) || (DSimplif B; try MainSimplif)
  | (?A -> ?B) =>
      (DSimplif A; try MainSimplif) || (DSimplif B; try MainSimplif)
  end
 with MainSimplif :=
  match goal with
  | |- (?A = ?B) => try DSimplif A; try DSimplif B
  end.
Ltac Length trm :=
  match trm with
  | (_ * ?B) => let succ := Length B in constr:(S succ)
  | _ => constr:1
  end.
Ltac assoc := repeat rewrite <- Ass.
\end{coq_example}
\end{centerframe}
\caption{Type isomorphism tactic (1)}
\label{isosltac1}
\end{figure}

\begin{figure}[ht]
\begin{centerframe}
\begin{coq_example}
Ltac DoCompare n :=
  match goal with
  | [ |- (?A = ?A) ] => reflexivity
  | [ |- (?A * ?B = ?A * ?C) ] =>
      apply Cons; let newn := Length B in
                  DoCompare newn
  | [ |- (?A * ?B = ?C) ] =>
      match eval compute in n with
      | 1 => fail
      | _ =>
          pattern (A * B) at 1; rewrite Com; assoc; DoCompare (pred n)
      end
  end.
Ltac CompareStruct :=
  match goal with
  | [ |- (?A = ?B) ] =>
      let l1 := Length A
      with l2 := Length B in
      match eval compute in (l1 = l2) with
      | (?n = ?n) => DoCompare n
      end
  end.
Ltac IsoProve := MainSimplif; CompareStruct.
\end{coq_example}
\end{centerframe}
\caption{Type isomorphism tactic (2)}
\label{isosltac2}
\end{figure}

The tactic to judge equalities modulo this axiomatization can be written as
shown on Figures~\ref{isosltac1} and~\ref{isosltac2}. The algorithm is quite
simple. Types are reduced using the axioms that can be oriented (this is done
by {\tt MainSimplif}). The normal forms are sequences of Cartesian
products without Cartesian products in the left component. These normal forms
are then compared modulo permutation of the components (this is done by {\tt
CompareStruct}). The main tactic to be called, realizing this algorithm, is
{\tt IsoProve}.

% Figure~\ref{isoslem} gives 
Here are examples of what can be solved by {\tt IsoProve}.
%\begin{figure}[ht]
%\begin{centerframe}
\begin{coq_example*}
Lemma isos_ex1 : 
  forall A B:Set, A * unit * B = B * (unit * A).
Proof.
intros; IsoProve.
Qed.

Lemma isos_ex2 :
  forall A B C:Set,
    (A * unit -> B * (C * unit)) =
    (A * unit -> (C -> unit) * C) * (unit -> A -> B).
Proof.
intros; IsoProve.
Qed.
\end{coq_example*}
%\end{centerframe}
%\caption{Type equalities solved by {\tt IsoProve}}
%\label{isoslem}
%\end{figure}

%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "Reference-Manual"
%%% End: