This review of On the fine-structure of regular algebra by Simon Foster and Georg Struth (Journal of Automated Reasoning 54 (2): 165-197, 2015) was first published in Computing Reviews.
Although it is dangerous to generalize from a single example, it is instructive to read Foster and Struth’s paper as a snapshot of the state of the art in automating ordinary reasoning. First, let’s look at the particular subject of the paper.
Regular expressions and finite automata are fundamental to the practice of computer science, but they are a fertile topic for research, too. A key question is identity: when do two regular expressions (or two finite automata) behave identically? The answer is “when they describe the same set of strings,” but that’s less useful computationally than it first looks, since it requires testing potentially infinite sets for equality. So, how else to judge identity? By setting out a collection of laws, or axioms, that encapsulate it. Foster and Struth’s paper surveys the attempts by a number of authors, including Boffa, Conway, Kozen, and Salomaa, to provide axioms that are sound (prove only genuine identities) and complete (prove all of them).
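For a flavour of what such an axiom set looks like, here is a standard presentation (not quoted from the paper under review) of Kozen’s star axioms for Kleene algebra, stated over an idempotent semiring in which $a \le b$ abbreviates $a + b = b$:

```latex
% Kozen's star axioms over an idempotent semiring,
% where a <= b abbreviates a + b = b.
\begin{align*}
  1 + a \cdot a^{*} &\le a^{*} \\
  1 + a^{*} \cdot a &\le a^{*} \\
  b + a \cdot x \le x &\;\Rightarrow\; a^{*} \cdot b \le x \\
  b + x \cdot a \le x &\;\Rightarrow\; b \cdot a^{*} \le x
\end{align*}
```

Soundness means that every identity derivable from such laws holds in the model of languages; completeness means that every language identity is so derivable, which Kozen proved for this particular axiomatization.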
The “fine structure” of the title is a detailed study of the interrelationships among these sets of axioms: showing when one set implies another (a process of deduction), and when one set is strictly stronger than another, which is shown by exhibiting a counterexample that satisfies one set but not the other. Both the deductions and the counterexamples are computer assisted, and this automation is the main contribution of the paper.
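To make the counterexample side concrete, here is a toy sketch (mine, not the paper’s) of the kind of finite search that a countermodel finder such as Isabelle’s Nitpick automates. It enumerates every interpretation of a binary operation on a two-element carrier, keeps those satisfying a small axiom set (associativity and commutativity), and reports one that violates a further law (idempotence), showing that the law is independent of the axioms:

```python
from itertools import product

CARRIER = (0, 1)  # two-element carrier set

def ops():
    """Enumerate all binary operations on the carrier as lookup tables."""
    for table in product(CARRIER, repeat=4):
        yield {(a, b): table[2 * a + b] for a in CARRIER for b in CARRIER}

def associative(op):
    return all(op[op[(a, b)], c] == op[a, op[(b, c)]]
               for a in CARRIER for b in CARRIER for c in CARRIER)

def commutative(op):
    return all(op[(a, b)] == op[(b, a)] for a in CARRIER for b in CARRIER)

def idempotent(op):
    return all(op[(a, a)] == a for a in CARRIER)

# Search: does {associativity, commutativity} entail idempotence? No:
for op in ops():
    if associative(op) and commutative(op) and not idempotent(op):
        print("countermodel:", sorted(op.items()))
        break
# Both XOR and the constant-zero operation qualify: each is associative
# and commutative, but maps (1, 1) to 0, so idempotence fails.
```

The paper’s searches are over richer structures (candidate Kleene algebras), but the principle is the same: a single small model suffices to separate two axiom sets.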
How does the automation contribute directly to the mathematics? It means that a number of loose ends could be tidied up: one family of axiomatizations was simplified, omissions were identified in the statement of a particular axiom set, and a gap was found in an existing, well-known proof. So the automation increased assurance in the work, but probably does not constitute a substantial advance in itself.
On the other hand, the work is indicative of real progress in the facilities available to support interactive proof and counterexample development. Advances have come in a number of ways. Most important is the way in which different approaches have come together: it used to be that a proof was done either manually, with a proof assistant, or in an entirely automated way, using some kind of decision procedure. The work under review uses the Isabelle proof assistant [1], which now offers the Sledgehammer tool [2] for dispatching goals to a number of external automated theorem provers for first-order logic; where these fail, Isabelle’s own simplification and proof procedures are available. The effect is to let users concentrate on shaping the higher-level architecture of their theories, rather than having to deduce results proof step by (painful) proof step. The approach reaches its limit when proofs go beyond the purely algebraic and involve richer mathematical structure (for example, sets) as provided by a higher-order logic.
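The division of labour is easy to caricature in code. The sketch below is purely illustrative: the prover names and flags are stand-ins, the success test is invented, and real Sledgehammer is far more sophisticated (it reconstructs any found proof inside Isabelle’s kernel rather than trusting the external tool). It shows only the shape of the loop: offer a goal to several external provers under a timeout, and fall back to internal tactics if none succeeds.

```python
import subprocess

# Hypothetical external first-order provers; names and flags are
# illustrative, not the actual Sledgehammer configuration.
EXTERNAL_PROVERS = [
    ["eprover", "--auto", "--cpu-limit=10"],
    ["vampire", "--time_limit", "10"],
]

def try_external(goal_file: str):
    """Offer the goal to each external prover in turn; return the
    name of the first one reporting a proof, or None."""
    for cmd in EXTERNAL_PROVERS:
        try:
            result = subprocess.run(cmd + [goal_file], capture_output=True,
                                    text=True, timeout=15)
        except (FileNotFoundError, subprocess.TimeoutExpired):
            continue  # prover missing or out of time: try the next one
        if "Proof found" in result.stdout:  # illustrative success marker
            return cmd[0]
    return None

def prove(goal_file: str) -> str:
    """Hammer-style strategy: external automation first, then fall
    back to the assistant's own (here, imaginary) internal tactics."""
    prover = try_external(goal_file)
    if prover is not None:
        return f"discharged by external prover {prover}"
    return "fell back to internal simplifier/tactics"
```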
Although most mathematicians don’t use theorem-proving technology in their day-to-day practice, in contrast to the wide take-up of computer algebra systems in various areas of mathematics and engineering, it has taken real hold in the mechanized metatheory of programming languages, as witnessed by the POPLmark challenge (http://www.seas.upenn.edu/~plclub/poplmark/). This may well be because it is an area where proofs can reach a scale at which the usual social process of proof checking is less than ideal, so real value accrues from formalization. In conclusion, it is interesting to speculate on how long it will take for the practicing mathematician to reach for a theorem-proving system as part of his or her daily work.
[1] Isabelle home page. https://www.cl.cam.ac.uk/research/hvg/Isabelle/index.html (accessed May 13, 2015).
[2] Blanchette, J. C.; Böhme, S.; Paulson, L. C. Extending Sledgehammer with SMT solvers. In Proc. of the 23rd Int. Conf. on Automated Deduction (CADE-23). Springer, 2011.