Multiple levels of classification naturally occur in many domains. Several multi-level modeling approaches account for this, and a subset of them attempt to provide their users with sanity-checking mechanisms that guard against conceptually ill-formed models. Historically, the respective multi-level well-formedness schemes have been either overly restrictive or too lax. Orthogonal Ontological Classification has been proposed as a foundation for sound multi-level modeling that combines the selectivity of strict schemes with the flexibility afforded by laxer ones. In this article, we present the second iteration of a formalization of Orthogonal Ontological Classification, which we empirically validated, using an implementation in ConceptBase, in order to demonstrate some of its hitherto only postulated claims. We discuss the expressiveness of the formal language used, the evaluation efficiency of ConceptBase, and the usability of our realization, based on a digital twin example model.
CC BY 4.0
Corresponding author: E-mail address: tk@ecs.vuw.ac.nz (T. Kühne)
This work was in part supported by the Swedish Knowledge Foundation (KKS) through its VF-KDO Profile research project, grant number 20180011. We are grateful to the anonymous reviewers whose in-depth feedback led to considerable improvements.