Ontologies are representations of information designed to enable logical reasoning over them. In many cases, there is a need to transform data into an ontological representation from other, more unstructured forms (for example, natural language), or a need to use ontologies representing intersecting parts of reality in conjunction. These and other situations motivate the use of automatically generated ontologies. In the process of generating these ontologies, faults may be introduced. Moreover, ontologies created manually by humans may also contain faults, because of the inexperience of the ontology designer, because of multiple designers working on the same ontology, or for many other possible reasons.
The most general classification of ontological faults divides them into inconsistencies and incompletenesses. Simply put, an inconsistency implies that opposite statements are included in the ontology. An alternative, more practical definition is that an inconsistency is the inclusion of a false statement in an ontology. An incompleteness implies that some statement is neither true nor false as a consequence of the ontology. Similarly, an alternative, more practical definition is that an incompleteness occurs when a true statement is not a consequence of the ontology.
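The distinction between the two fault types can be sketched with a toy knowledge base of ground literals. This is a deliberately minimal illustration, not the project's actual machinery: real ontology languages (e.g. OWL) support far richer reasoning, and the predicate names below are purely hypothetical.

```python
# A tiny "ontology" as a set of literal facts, where "-X" denotes the negation of "X".

def negate(lit):
    """Return the negation of a literal: "p" <-> "-p"."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def is_inconsistent(kb):
    """Inconsistency: the knowledge base asserts both a statement and its negation."""
    return any(negate(lit) in kb for lit in kb)

def is_incomplete_about(kb, statement):
    """Incompleteness w.r.t. a statement: neither it nor its negation follows from the KB."""
    return statement not in kb and negate(statement) not in kb

consistent_kb = {"penguin(tweety)", "bird(tweety)"}
faulty_kb = {"flies(tweety)", "-flies(tweety)"}

print(is_inconsistent(faulty_kb))                       # True: opposite statements
print(is_inconsistent(consistent_kb))                   # False
print(is_incomplete_about(consistent_kb, "flies(tweety)"))  # True: status unknown
```

Note that in this simplified setting "consequence" is just set membership; in a real ontology, entailment through axioms and rules would take its place.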
Within this project we are concerned with refining this classification of faults into more expressive ones, and with developing methods for automatically (or interactively) detecting faults and repairing them.
This project is mainly carried out by Juan Casanova, a CDT in Data Science student, under the supervision of Alan Bundy and Perdita Stevens. It is also partially funded by BrainnWave, a company with great interest in the area of fault detection and repair.