Candidate
Vajihollah Montaghami
Title
Debugging Relational Declarative Models with Discriminating Examples
Supervisor
Derek Rayside
Abstract
Models, especially those with mathematical or logical foundations, have proven valuable to engineering practice in a wide range of disciplines, including software engineering. Models enable the engineer to focus on the essential abstractions, while eliding less important details. Like any human-created artifact, a model might have imperfections at certain stages of the design process: it might have internal inconsistencies, or it might not properly express the engineer’s design intentions. Automated analysis tools can assist the engineer in ensuring that the model is a consistent and accurate expression of their design intentions.
Example generators are a category of tools that have proven useful to software engineers: concrete examples illustrate the model’s abstractions. Nevertheless, some examples might be more interesting or useful to the software engineer than others. Previous research has produced tools that generate examples that are arbitrary, minimal, or targeted.
This dissertation describes tools and techniques for generating discriminating examples: examples that test hypotheses articulating how the expressed model might differ from the intended model. By accepting or rejecting the example, the engineer affirms or refutes the hypothesis. Examples that reveal a limitation of the expressed model are more useful than those that do not; discriminating examples are intended to fulfill this function. One general hypothesis is that mistakes happen near the boundaries. We have developed a technique to generate pairs of examples that highlight the boundary of the written model: one example is inside and the other outside, and they are at a minimal distance from each other. By comparing these examples with each other and with the model, the engineer can confirm that the boundary of the written model is where it is intended to be.
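To illustrate the idea (the sketch below is ours, not taken from the dissertation), consider a small Alloy model whose written constraint may be stronger than intended. The two candidate examples differ by a single tuple and straddle the boundary of the written model: the first satisfies it, the second does not. Presenting such a pair asks the engineer whether the stricter or the looser multiplicity was intended.

    sig User {}
    sig File { owner: one User }   -- written model: every file has exactly one owner

    -- Inside the boundary: one file owned by one user.
    pred insideExample { #File = 1 and #User = 1 and File.owner = User }

    -- Outside the boundary, at minimal distance: the same configuration with
    -- the single ownership tuple removed. The Alloy Analyzer finds no instance
    -- for this predicate under the written model, confirming that it lies
    -- outside. If the engineer accepts an unowned file as legitimate, the
    -- multiplicity 'one' should be weakened to 'lone'.
    pred outsideExample { #File = 1 and #User = 1 and no File.owner }

    run insideExample for 1
    run outsideExample for 1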
More specific hypotheses can be formed by analyzing the model. We have developed a technique for inferring the semantics of the model based on a library of modeling patterns that we have compiled. With this foundation, our debugger can explore strengthening or weakening particular logical properties of the model; for example, a total order might be weakened to a partial order. These more specific hypotheses are also explored near the boundary of the written model, using the boundary technique described above. To support these example generation techniques and the engineer's use of them, we additionally propose two syntactic extensions to the Alloy modeling language: one for expressing examples, and one for expressing generator axioms. The latter requires model transformations and computations that we also develop.
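As a sketch of the kind of weakening the debugger might explore (again, our illustration rather than the dissertation's own syntax or pattern library), consider an ordering constraint written in plain Alloy. Dropping the totality clause weakens a strict total order to a strict partial order, and a discriminating example is one admitted by the weaker property but not the stronger one.

    sig Task { prereq: set Task }

    -- Written model: ^prereq is a strict total order on Task.
    pred totalOrder {
      no iden & ^prereq                                              -- acyclic, hence irreflexive
      all disj a, b: Task | a -> b in ^prereq or b -> a in ^prereq   -- any two tasks comparable
    }

    -- Weakened hypothesis: drop totality, leaving a strict partial order.
    pred partialOrder { no iden & ^prereq }

    -- A discriminating example satisfies the weaker property but not the
    -- stronger one: for instance, two tasks with no prerequisite relation
    -- between them. Whether the engineer accepts such incomparable tasks
    -- reveals whether totality was intended.
    run { partialOrder and not totalOrder } for 3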
Generating discriminating examples is computationally demanding; this dissertation shows how to make it feasible. While the dissertation provides evidence for the utility of discriminating examples through a significant historical case study, important future work involves validating the approach through user studies with human subjects.