Parking concerns are becoming pressing for supporting the urban core. These persistent parking problems could be turned into new opportunities brought about by current trends toward a globally connected continuum. This paper reports work in progress on capitalizing on private land properties for parking, in order to relieve stress on public agencies, create new sources of revenue, and enlist new entities in the intermediary market. These intermediaries, labelled Parking Service Providers (PSPs), play a broker role by advertising parking lots on a shared cloud platform. To streamline these business collaborations and related processes, physical parking lots are augmented with Internet connectivity, allowing cloud-provided applications to aggregate these lots into a larger inventory. The Internet of Things (IoT) paradigm expands the scope of cloud-based intelligent car-parking services in smart cities, with novel applications that better regulate car-parking-related traffic. This paper presents a work-in-progress agenda that contributes to new business solutions and state-of-the-art research impacts. We describe a multi-layered PSP business model built from interdisciplinary research blocks, with original results expected at each layer.
Critical infrastructures (CIs) are becoming increasingly sophisticated, with embedded cyber-physical systems (CPSs) that provide managerial automation and autonomic controls. Yet these advances expose CI components to new cyber-threats, leading to chains of dysfunction with catastrophic socio-economic implications. We propose a comprehensive architectural model to support the development of incident management tools that provide situation awareness and cyber-threat intelligence for CI protection, with a special focus on the smart-grid CI. The goal is to unlock forensic data from CPS-based CIs in order to perform predictive analytics. In doing so, we use AI (Artificial Intelligence) paradigms for data collection, threat detection, and cascade-effects prediction.
Smart grid employs ICT infrastructure and network connectivity to optimize efficiency and deliver new functionalities. This evolution is associated with an increased risk of cybersecurity threats that may hamper smart grid operations. Power utility providers need tools for assessing the risk that prevailing cyberthreats pose to their ICT infrastructures. Frameworks that guide the development of these tools are essential for defining and revealing vulnerability-analysis indicators. We propose a data-driven approach for designing testbeds to evaluate the vulnerability of cyberphysical systems against cyberthreats. The proposed framework uses data reported from multiple components of the cyberphysical system architecture layers, including the physical, control, and cyber layers. At the physical layer, we consider component inventory and related physical flows. At the control level, we consider control data, such as SCADA data flows in industrial and critical infrastructure control systems. Finally, at the cyber layer, we consider existing security and monitoring data from cyber-incident event management tools, which are increasingly embedded into the control fabrics of cyberphysical systems.
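As a rough illustration of the layered data idea sketched in the abstract above, the following snippet combines observations from the physical, control, and cyber layers into a single indicator; the class, field names, and scoring are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class LayerData:
    """Illustrative container for data reported by one architecture layer."""
    name: str                      # e.g. "physical", "control", "cyber"
    observations: dict = field(default_factory=dict)

def vulnerability_indicator(layers: list[LayerData]) -> float:
    """Toy aggregation: average the per-layer 'exposure' observations.
    A real testbed would apply domain-specific scoring per layer."""
    exposures = [l.observations.get("exposure", 0.0) for l in layers]
    return sum(exposures) / len(exposures) if exposures else 0.0

if __name__ == "__main__":
    layers = [
        LayerData("physical", {"exposure": 0.2}),   # component inventory, physical flows
        LayerData("control",  {"exposure": 0.5}),   # SCADA data flows
        LayerData("cyber",    {"exposure": 0.7}),   # security/monitoring events
    ]
    print(f"indicator = {vulnerability_indicator(layers):.2f}")
```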
This document reports a technical description of ELVIRA project results obtained as part of Work-package 4.1, entitled “Multi-agent systems for power Grid monitoring”. The ELVIRA project is a collaboration between researchers in the School of IT at the University of Skövde and Combitech Technical Consulting Company in Sweden, with the aim to design, develop and test a testbed simulator for critical infrastructure cybersecurity. This report outlines intelligent approaches that continuously analyze data flows generated by Supervisory Control And Data Acquisition (SCADA) systems, which monitor contemporary power grid infrastructures. However, cybersecurity threats and security mechanisms cannot be analyzed and tested on actual systems, and thus testbed simulators are necessary to assess vulnerabilities and evaluate infrastructure resilience against cyberattacks. This report suggests an agent-based model to simulate the behaviour of SCADA-like cyber components facing cyber-infection, in order to experiment with and test intelligent mitigation mechanisms.
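A minimal agent-based sketch of the kind of simulation described above: SCADA-like nodes spread a cyber-infection along their connections while a simple mitigation step cleans infected nodes. The topology, probabilities, and state names are illustrative assumptions, not the ELVIRA model.

```python
import random

# Illustrative agent-based sketch: SCADA-like nodes that can be infected by a
# propagating cyber-threat and cleaned by a mitigation step.
random.seed(1)

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = "healthy"          # healthy -> infected -> mitigated
        self.neighbours = []

def step(nodes, p_spread=0.3, p_mitigate=0.2):
    """One simulation tick: infections spread, then mitigation acts."""
    newly_infected = []
    for n in nodes:
        if n.state == "infected":
            for m in n.neighbours:
                if m.state == "healthy" and random.random() < p_spread:
                    newly_infected.append(m)
    for m in newly_infected:
        m.state = "infected"
    for n in nodes:
        if n.state == "infected" and random.random() < p_mitigate:
            n.state = "mitigated"

# Build a small ring topology and seed one infection.
nodes = [Node(i) for i in range(6)]
for i, n in enumerate(nodes):
    n.neighbours = [nodes[(i - 1) % 6], nodes[(i + 1) % 6]]
nodes[0].state = "infected"

for tick in range(5):
    step(nodes)
    print(tick, [n.state for n in nodes])
```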
The competitiveness and efficiency of an enterprise depend on its ability to interact with other enterprises and organisations. In this context, interoperability is defined as the ability of business processes as well as enterprise software and applications to interact. Interoperability remains a problem, and there are numerous issues to be resolved in different situations. We propose method engineering as an approach to organise interoperability knowledge in a method chunk repository. In order to organise the knowledge repository, we need an interoperability classification framework associated with it. In this paper we propose a generic architecture for a method chunk repository, elaborate on a classification framework and associate it with some existing bodies of knowledge. We also show how the proposed framework can be applied in a working example.
Cyber-Physical Systems (CPSs) are augmenting traditional Critical Infrastructures (CIs) with data-rich operations. This integration creates complex interdependencies that expose CIs and their components to new threats. A systematic approach to threat modeling is necessary to assess CIs' vulnerability to cyber, physical, or social attacks. We suggest a new threat modeling approach to systematically synthesize knowledge about the safety management of complex CIs and situational awareness that helps in understanding the nature of a threat and its potential cascading-effect implications.
This volume constitutes the proceedings of the 9th IFIP WG 8.1 Conference on the Practice of Enterprise Modeling, held in November 2016 in Skövde, Sweden. The PoEM conference series started in 2008 and aims to provide a forum for sharing knowledge and experiences between the academic community and practitioners from industry and the public sector. The 18 full papers and 9 short papers accepted were carefully reviewed and selected from 54 submissions and cover topics related to information systems development, enterprise modeling, requirements engineering, and process management. In addition, the keynote by Robert Winter on “Establishing 'Architectural Thinking' in Organizations” is also included in this volume.
We conducted the interview iteratively via email correspondence over the summer of 2017. Anne had been the general chair of PoEM 2016, held in Skövde, and, given her history with PoEM, we were thus very keen to learn about her views on enterprise modeling.
Enterprise Modeling (EM) addresses business-IT alignment in a holistic manner by providing the techniques, languages, tools and best practices for using models to represent organizational knowledge and information systems from different perspectives. Complex business and technology conditions mean that EM plays an important role in reaching such alignment. Quality attributes such as agility, sensitivity, responsiveness, adaptability, autonomy and interoperability are emerging as the norms for advanced enterprise models. Achieving these qualities will allow all components of an enterprise to operate together in a cooperative manner for the purpose of maximizing the overall benefit to the enterprise.
Reuse of virtual engineering models and simulations improves engineering efficiency, but reuse requires preserving the information provenance. This paper suggests a framework based on the W7 data provenance model to be part of simulation data management implemented in product lifecycle management systems. The resulting provenance framework is based on a case study in which a product was re-engineered using finite element analysis.
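For illustration, a W7-style provenance record could capture the seven W questions for a re-engineered simulation model roughly as follows; the field semantics and example values are hypothetical and do not reproduce the paper's framework.

```python
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    """Illustrative W7-style provenance entry for a simulation model.
    Field names follow the seven W questions; the actual framework in the
    paper may structure these differently."""
    what: str    # the simulation artefact or event, e.g. an FE analysis run
    when: str    # timestamp of the event
    where: str   # system or location, e.g. the PLM vault holding the model
    how: str     # action that produced the artefact
    who: str     # responsible engineer or tool
    which: str   # instruments/software used, e.g. the FE solver
    why: str     # rationale, e.g. the re-engineering change request

record = ProvenanceRecord(
    what="FE mesh v2 of bracket", when="2015-03-02T10:15",
    where="PLM vault /projects/bracket", how="re-meshed after geometry change",
    who="analyst A", which="FE solver X", why="re-engineering study")
print(record.what, "-", record.why)
```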
Intelligent integration of information has challenged database research for over 35 years. While data integration processes of all kinds are now reasonably well understood and widely used in practice, the growth and heterogeneity of data require much higher degrees of automation to limit the need for human specialist work. This requires deeper insights into data-centric approaches of Enterprise Information Integration which focus on the semantics of information integration. Recent formalizations and algorithms enable significant improvements both in schema integration and in its automated transformation to efficient data-level integration, in a wide variety of architectural settings such as data warehouses or peer-to-peer databases. In addition to giving a short overview of developments in this field over the past 20 years, this paper focuses particularly on the challenges posed by heterogeneity in data models.
The CAiSE 98 paper “Architecture and Quality in Data Warehouses” and its expanded journal version (Jarke et al. 1999) was the first to add a Zachman-like (Zachman 1987) explicit conceptual enterprise modeling perspective to the architecture of data warehouses. Until then, data warehouses were just seen as collections of – typically multidimensional and historized – materialized views on relational tables, without consideration of modeling of the (business) concepts underlying their structure. The paper pointed out that this additional conceptual perspective was not just necessary for a truly semantic data integration but also a prerequisite for bringing the then very active data warehouse movement together with another topic of quickly growing importance, that of data quality.
DeepTelos is defined as a set of rules and constraints that enable multi-level modeling for the Telos metamodeling language. In its ConceptBase implementation, rules and constraints are realized by Datalog clauses. We first demonstrate the core functions of Telos and the use of simple rules and constraints, then the meta-level rules and constraints defining DeepTelos. A couple of examples show how the DeepTelos rules and constraints are compiled to simple rules and constraints and thereby realize the desired multi-level modeling environment. The main example is taken from the Galileo satellite domain.
Classical data models such as entity-relationship diagrams distinguish between objects (entities) and values. An object is referenced by its immutable identifier, which is assigned to the object when it is created. Its state is established by a combination of mutable values taken from so-called domains. Domains are sets of values that come with their own algebraic semantics, such as integer numbers or strings. So, values have no identity and do not change, whereas objects are identified and their state changes. In this paper, we challenge this strict dichotomy by introducing so-called attribute objects.
DeepTelos is a straightforward extension of the Telos modeling language to allow some form of multi-level modeling. A variant of Telos has been implemented in the ConceptBase system on top of a Datalog engine. Telos defines the concepts of instantiation, specialization and attribution/relations by means of axioms. In addition, the user can define new constructs by deductive rules, integrity constraints, and so-called query classes. In this paper, we tackle the process challenge formulated for the MULTI 2019 workshop to see to what extent DeepTelos is able to represent the requirements of this challenge.
The process modeling challenge provides an opportunity to compare various approaches to multi-level conceptual modeling. In particular, the challenge requests the definition of constructs for designing process models plus the facilities to create process models with these constructs, and to analyze the execution of such processes, all in one multi-level model. In this paper, we evaluate the performance of DeepTelos in solving the challenge. DeepTelos is an extension of the Telos modeling language that adds a small number of rules and constraints to the Telos axioms in order to facilitate multi-level modeling by means of so-called most-general instances, a variant of the powertype pattern. We present the technology behind DeepTelos and address the individual tasks of the process modeling challenge. A critical review discusses strengths and weaknesses exposed by the solution to the challenge.
This chapter provides a practical guide on how to use the meta data repository ConceptBase to design information modeling methods by using meta-modeling. After motivating the abstraction principles behind meta-modeling, the language Telos as realized in ConceptBase is presented. First, a standard factual representation of statements at any IRDS abstraction level is defined. Then, the foundation of Telos as a logical theory is elaborated, yielding simple fixpoint semantics. The principles for object naming, instantiation, attribution, and specialization are reflected by roughly 30 logical axioms. After the language axiomatization, user-defined rules, constraints and queries are introduced. The first part is concluded by a description of active rules, which allow the specification of reactions of ConceptBase to external events. The second part applies the language features of the first part to a full-fledged information modeling method: the Yourdon method for Modern Structured Analysis. The notations of the Yourdon method are designed along the IRDS framework. Intra-notational and inter-notational constraints are mapped to queries. The development life cycle is encoded as a software process model closely related to the modeling notations. Finally, aspects of managing the modeling activities are addressed by metric definitions.
Multilevel modeling aims at improving the expressiveness and conciseness of conceptual modeling languages by allowing domain knowledge to be expressed at higher abstraction levels. In this demonstration, we go through two variants of multilevel extensions for the ConceptBase system, which had originally been used more for the design of domain-specific conceptual modeling languages. The demonstration highlights the partial evaluation feature of the deductive rule engine of ConceptBase. It also shows how multilevel modeling is essentially about a better understanding of how instantiation, specialization, and attribution relate to each other in conceptual modeling.
The Erasmus+ OMI initiative aims to facilitate academic education in enterprise modeling by creating a shared and open modeling environment in which various domain-specific modeling languages (DSMLs) are integrated. The DSMLs form modeling perspectives, e.g. for data modeling, process modeling, goal modeling, and so forth. This document describes a tool designed to support the open modeling environment by providing semantic integrity checking services (SemCheck [1]), which can be used to analyze interrelated models in order to find errors and to extract information from the models.
Enterprises are complex and dynamic organizations that can hardly be understood from a single viewpoint. Enterprise modeling tackles this problem by providing multiple, specialized modeling languages, each designed for representing information about the enterprise from a given viewpoint. The OMiLAB initiative promotes the use of metamodeling to design such domain-specific languages and to provide them to the community through an open repository. In this chapter, we discuss how this metamodeling approach can be combined with the design of integrity constraints that span multiple modeling languages. We propose the services of the ConceptBase system as a constraint checker for modeling languages created with the ADOxx platform.
In the last two decades, about a dozen proposals have been made to extend object-oriented modeling with multiple abstraction levels. One group of proposals assigns explicit levels to objects and classes. The second group uses the powertype pattern to implicitly establish levels. From this group, we consider two proposals, DeepTelos and MLT*. Both have been defined via axioms and both give a central role to the powertype pattern. In this paper, we reconstruct MLT* with the deductive axiomatization style used for DeepTelos. The resulting specification is executed in a deductive database to check MLT* multi-level models for errors and complete them with derived facts that do not have to be explicitly asserted by modelers. This leverages the rich rules of MLT* with the deductive approach underlying DeepTelos. The effort also allows us to clearly establish the relation between DeepTelos and MLT*, in an attempt to clarify the relations between approaches in this research domain. As a byproduct, we supply MLT-Telos as a fully operational deductive implementation of MLT* to the research community.
This paper proposes the structure of a so-called method chunk repository that contains instructions on how to solve interoperability problems between organizations and their information systems. We detail how interoperability problems and their solutions should be tagged in order to match them. The combination of such tagged interoperability problem classifiers forms the language to express meaningful statements about the situation in which certain method chunks are applicable to solve an observed problem.
Multi-level modeling (MLM) represents a significant extension to the traditional two-level object-oriented paradigm with the potential to dramatically improve upon the utility, reliability and complexity of models. Different from conventional approaches, MLM approaches allow for an arbitrary number of classification levels and introduce other concepts that foster expressiveness, reuse and adaptability. A key aspect of the MLM paradigm is the use of entities that are simultaneously types and instances, a feature which has consequences for conceptual modeling, language engineering and the development of model-based software systems.
Multi-level modeling (MLM) as part of object-oriented modeling aims at fully utilizing the expressive power of multiple abstraction levels. While these levels were initially used to define domain-specific modeling languages, i.e. for linguistic purposes, the MLM community has long argued that there is much more to gain by tapping into ontological abstraction levels. While MLM is a rather specialized research field, there are now quite a number of different proposals. There is thus an opportunity to develop a uniform core of MLM that could then become part of a standard and be taken up by the larger modeling community.
This document reports a technical description of ELVIRA project results obtained as part of Work-package 3.1&3.2 entitled “Taxonomy of Critical Infrastructure (Taxonomy of events + Taxonomy of CI component and relationship)”. ELVIRA project is a collaboration between researchers in School of IT at University of Skövde and Combitech Technical Consulting Company in Sweden, with the aim to design, develop and test a testbed simulator for critical infrastructures cybersecurity.
Welcome to the proceedings of the workshops held during the 34th International Conference on Conceptual Modeling (ER 2015) in Stockholm, Sweden. Workshops offer an incremental exploration of cutting-edge research issues that become prominent in the future. The ER conference has a long tradition of thought-provoking and state-of-the-art workshops. This year we had a rich combination of seven workshops and a special symposium on conceptual modeling education. We attracted 52 paper submissions for all the workshops, of which 26 were accepted. These workshops also have invited papers, along with papers from the symposium on conceptual modeling education. This volume comprises contributions from the following workshops:
AHA 2015 – Conceptual Modeling for Ambient Assistance and Healthy Ageing
CMS 2015 – Conceptual Modeling of Services
EMoV 2015 – Event Modeling and Processing in Business Process Management
MoBiD 2015 – Modeling and Management of Big Data
MORE-BI 2015 – Modeling and Reasoning for Business Intelligence
MReBA 2015 – Conceptual Modeling in Requirements Engineering and Business Analysis
QMMQ 2015 – Quality of Modeling and Modeling of Quality
SCME 2015 – Symposium on Conceptual Modeling Education
The MULTI 2022 Collaborative Comparison Challenge was created to promote in-depth discussion between multi-level modeling approaches. This paper presents a comparison of DeepTelos- and DMLA-based solutions in response to the challenge. We first present each approach and solution separately, and then list the similarities and differences between the two solutions, discussing their relative strengths and weaknesses.
Multi-level modeling aims to reduce redundancy in data models by defining properties at the right abstraction level and letting more specific levels inherit them. We revisit one of the earliest such approaches, Telos, and investigate what needs to be added to its axioms to obtain a true multi-level modeling language. Unlike previous approaches, we define levels not with numeric potencies but with hierarchies of so-called most general instances.
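A minimal deductive sketch of the most-general-instance idea, assuming the rule "if N is declared as the most general instance of class C, then every instance of C is a specialization of N". DeepTelos realizes such rules as Datalog clauses in ConceptBase; the plain-Python fixpoint below is only an approximation, and the example names are invented.

```python
# Illustrative deductive sketch (not the actual DeepTelos axioms).
instance_of = {("Collie", "Species"), ("Lassie", "Collie")}
isa = set()                      # derived specializations
mgi = {("Animal", "Species")}    # Animal is the most general instance of Species

def close(instance_of, isa, mgi):
    """Naive fixpoint: apply the MGI rule and ISA-based instance inheritance."""
    changed = True
    while changed:
        changed = False
        # MGI rule: mgi(N, C) & instance_of(x, C) -> isa(x, N)
        for (n, c) in mgi:
            for (x, c2) in set(instance_of):
                if c2 == c and (x, n) not in isa:
                    isa.add((x, n)); changed = True
        # upward instantiation: isa(c, d) & instance_of(x, c) -> instance_of(x, d)
        for (c, d) in set(isa):
            for (x, c2) in set(instance_of):
                if c2 == c and (x, d) not in instance_of:
                    instance_of.add((x, d)); changed = True
    return instance_of, isa

instance_of, isa = close(set(instance_of), isa, mgi)
print(sorted(isa))          # Collie is derived to be a subclass of Animal
print(sorted(instance_of))  # Lassie is derived to be an instance of Animal
```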
Key performance indicators (KPIs) are widely used to manage processes of all types, including manufacturing, logistics, and business processes. We present an approach to map informal specifications of key performance indicators to prototypical data warehouse designs that support the calculation of the KPIs via aggregate queries. We argue that the derivation of the key performance indicators should start from a process definition that includes scheduling and resource information.
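As an illustration (the table, column, and KPI names are made up, not taken from the paper), a KPI such as "average lead time per product line" maps to a simple aggregate query over a prototypical fact table:

```python
import sqlite3

# Hypothetical fact table and KPI, computed as an aggregate query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE process_fact(product_line TEXT, lead_time_hours REAL)")
con.executemany("INSERT INTO process_fact VALUES (?, ?)",
                [("A", 10.0), ("A", 14.0), ("B", 7.5)])

kpi = con.execute("""
    SELECT product_line, AVG(lead_time_hours) AS avg_lead_time
    FROM process_fact
    GROUP BY product_line
    ORDER BY product_line
""").fetchall()
print(kpi)   # [('A', 12.0), ('B', 7.5)]
```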
The 30th International Conference on Conceptual Modeling (ER-2011) highlighted the strong and persistent interest in research on conceptual modeling for developing information systems. Topics included data modeling theory, goal modeling, socio-technical factors, requirements engineering, process modeling, and ontologies.
Conventional wisdom says that open-access (OA) publishing is not sustainable without a proper financial basis. Commercial OA publishers charge authors, their employers, or their umbrella organizations considerable fees to cover the costs and to generate income for the OA publishers. This may lead to undesired effects such as giving preference to authors who can afford the fee over those who do not have the resources. CEUR-WS.org (CEUR Workshop Proceedings) selected a different path in 1995 when it was founded as a service operated under the umbrella of Sun SITE Central Europe at RWTH Aachen University. We deliberately focused on computer science workshop proceedings, which had difficulty finding publishers at that time. More importantly, we designed the service around a team of volunteer scientists who perform the publication service in their free time and without charging any fee from authors, editors, or readers. Now, after more than 20 years, the service is still operating and growing every year. In the last year, we published more than 250 proceedings volumes, making CEUR-WS.org one of the largest open-access publication channels for computer science proceedings. In this talk, we discuss the success factors of CEUR-WS.org and some of the challenges that we encountered over the past two decades. We also give some glimpses into future extensions of the service, such as open data, semantification, and improved interoperability with neighbor services such as DBLP and indexing by national libraries. We believe that scientists should take a greater role in the publication value chain. We know best our own needs for future publication models, the ethical conditions for fairness, and the rules governing scientific publication. Many thanks go to the team of Sun SITE Central Europe for hosting the service and to the members of the CEUR-WS.org Team who run the service for free, for you.
This document reports a technical description of ELVIRA project results obtained as part of Work-package 2.1, entitled “Complex Dependencies Analysis”. In this technical report, we review recent research in which connections are regarded as factors influencing the IT systems that monitor critical infrastructure, and based on which potential dependencies and resulting disturbances are identified and categorized. Each kind of dependency is discussed on the basis of our own entity-based model. Among those dependencies, logical and functional connections are analysed in more detail with respect to modelling and simulation techniques.
Power grids form the central critical infrastructure in all developed economies. Disruptions of power supply can cause major effects on the economy and the livelihood of citizens. At the same time, power grids are being targeted by sophisticated cyber attacks. To counter these threats, we propose a domain-specific language and a repository to represent power grids and related IT components that control the power grid. We apply our tool to a standard example used in the literature to assess its expressiveness.
Modern security practices promote quantitative methods to provide prioritisation insights and support predictive analysis, which is supported by open-source cybersecurity databases such as the Common Vulnerabilities and Exposures (CVE) list, the National Vulnerability Database (NVD), CERT, and vendor websites. These public repositories provide a way to standardise and share up-to-date vulnerability information, with the purpose of enhancing cybersecurity awareness. However, data quality issues in these vulnerability repositories may lead to incorrect prioritisation and misemployment of resources. In this paper, we aim to empirically analyse the data quality impact of vulnerability repositories for actual information technology (IT) and operational technology (OT) systems, with a particular focus on data inconsistency. Our case study shows that data inconsistency may misdirect the investment of cybersecurity resources. In contrast, correlated vulnerability repositories and trustworthiness-based data verification bring substantial benefits for vulnerability management.
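A toy example of the kind of inconsistency check implied above: comparing severity scores reported for the same CVE identifiers by two repositories and flagging disagreements. All identifiers and scores below are fabricated for illustration only.

```python
# Fabricated example data: severity scores for the same CVEs from two sources.
repo_a = {"CVE-2021-0001": 7.5, "CVE-2021-0002": 5.0}   # e.g. NVD-style scores
repo_b = {"CVE-2021-0001": 9.8, "CVE-2021-0002": 5.0}   # e.g. vendor advisory scores

def inconsistent(a, b, tolerance=1.0):
    """Return CVE ids whose scores differ by more than the tolerance."""
    return [cve for cve in a.keys() & b.keys()
            if abs(a[cve] - b[cve]) > tolerance]

print(inconsistent(repo_a, repo_b))   # ['CVE-2021-0001']
```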
Critical infrastructures (CIs) such as power grids link a plethora of physical components from many different vendors to the software systems that control them. These systems are constantly threatened by sophisticated cyber attacks. The need to improve the cybersecurity of such CIs, through holistic system modeling and vulnerability analysis, cannot be overstated. This is challenging since a CI incorporates complex data from multiple interconnected physical and computation systems. Meanwhile, exploiting vulnerabilities in different information technology (IT) and operational technology (OT) systems leads to various cascading effects due to the interconnections between systems. The paper investigates the use of a comprehensive taxonomy to model such interconnections and the implied dependencies within complex CIs, bridging the knowledge gap between IT security and OT security. The complexity of CI dependence analysis is harnessed by partitioning complicated dependencies into cyber and cyber-physical functional dependencies. These defined functional dependencies further support cascade modeling for vulnerability severity assessment and the identification of critical components in a complex system. On top of the proposed taxonomy, the paper further suggests power-grid reference models that enhance the reproducibility and applicability of the proposed method. The methodology followed was design science research (DSR), which supports the design and validation of the proposed artifacts. More specifically, the structural, functional adequacy, compatibility, and coverage characteristics of the proposed artifacts are evaluated through a three-fold validation (two case studies and expert interviews). The first study uses two instantiated power-grid models extracted from existing architectures and frameworks like the IEC 62351 series. The second study involves a real-world municipal power grid.
Telos is a conceptual modeling language intended to capture software knowledge, such as software system requirements, domain knowledge, architectures, design decisions and more. To accomplish this, Telos was designed to be extensible in the sense that the concepts used to capture software knowledge can be defined in the language itself, instead of being built-in. This extensibility is accomplished through powerful metamodeling features, which proved very useful for interrelating heterogeneous models from requirements, model-driven software engineering, data integration, ontology engineering, cultural informatics and education. We trace the evolution of ideas and research results in the Telos project from its origins in the late eighties. Our account looks at the semantics of Telos, its various implementations and its applications. We also recount related research by other groups and the cross-influences of ideas thereof. We conclude with lessons learnt.
Traditional two-level modeling approaches distinguish between class features and object features. Using UML parlance, classes have attributes, which require their instances to have object slots. Multi-Level Modeling unifies classes and objects into "clabjects", and it has been suggested that attributes and slots can and should be unified into "fields" in a similar way. The notion of deep instantiation for clabjects creates the possibility of "deep fields", i.e., fields that expand on the roles of pure attributes or pure slots. In this paper, we discuss several variants of such a "deep field" notion, pointing out the semantic differences and the various resulting trade-offs. We hope our observations will help clarify the range of options for supporting clabject fields in multi-level modeling and thus aid future MLM development.
Multiple levels of classification naturally occur in many domains. Several multi-level modeling approaches account for this and a subset of them attempt to provide their users with sanity-checking mechanisms in order to guard them against conceptually ill-formed models. Historically, the respective multi-level well-formedness schemes have either been overly restrictive or too lax. Orthogonal Ontological Classification has been proposed as a foundation that combines the selectivity of strict schemes with the flexibility afforded by laxer schemes. In this paper, we present a formalization of Orthogonal Ontological Classification, which we empirically validated to demonstrate some of its hitherto only postulated claims using an implementation in ConceptBase. We discuss both the formalization and the implementation, and report on the limitations we encountered.
The "MULTI" workshop series has set a number of multi-level modeling challenges, each designed to allow competing multi-level modeling approaches to demonstrate their capabilities and/or to tease out their limitations. The challenges therefore have been serving a three-fold purpose: First, they have allowed technologies to demonstrate their abilities. Second, they have pointed out where technologies still fall short of providing optimal modeling support. Third, they have provided a basis for comparing competing technologies, often revealing the trade-offs implied by certain design choices. The MULTI Warehouse Challenge described in this paper is the fourth installment in this series, defining a new unique set of demanding modeling challenges.
Manufacturing data and information are produced and used throughout the lifecycle of product development. Product lifecycle management (PLM) systems provide a suitable platform for managing them. For appropriate management, manufacturing data needs to be identified, classified, and stored based on the structure of PLM systems. In this paper, the results of an industrial manufacturing data collection study are interpreted, and their relation to the main structures in PLM systems is specified. Subsequently, a new information model for assigning this data and information to the PLM data model is presented. The main contribution of this information model is the definition of property and change objects and their integration with the structure of PLM systems; changes and revisions of those data are thereby formally defined and hence traceable.
Virtual engineering increases the rate and diversity of models being created; these models hence require maintenance in a product lifecycle management (PLM) system. This also induces the need to understand their creation contexts, known as historical or provenance information, in order to reuse the models in other engineering projects. PLM systems are specifically designed to manage product- and production-related data. However, they are less capable of handling knowledge about the contexts of the models without an appropriate extension. Therefore, this research proposes an extension to PLM systems by designing a new information model that contains virtual models, their related data and the knowledge generated from them through various engineering activities, so that they can be effectively used to manage historical information related to all these virtual factory artifacts. This information model is designed to support a new Virtual Engineering ontology for capturing and representing virtual models and engineering activities, tightly integrated with an extended provenance model based on the W7 model. In addition, this paper presents how an application prototype, called Manage-Links, has been implemented with these extended PLM concepts and then used in several virtual manufacturing activities at an automotive company.
Saving and managing virtual models’ provenance information (models’ history) can increase the level of reusability of those models. This paper describes a provenance management system (PMS) that has been developed based on an industrial case study.
The product lifecycle management (PLM) system, as a main data management system, is responsible for receiving virtual models and their related data from computer-aided technologies (CAx) and for providing this information to the PMS. In this paper, the management of discrete event simulation data with the PLM system is demonstrated as the first link of the provenance data management chain (CAx-PLM-PMS).
Shortening the product development process is one of the main approaches enterprises take to bring their products to market. Virtual manufacturing tools can help companies reduce their time to market by reducing the engineering lead time. Extensive use of virtual engineering models results in a need to verify the models' accuracy. This assessment of the usability of virtual engineering has been named virtual confidence. The two main factors in achieving this confidence are the accuracy of the virtual models and of the virtual engineering results.
To control both of the above factors, a complete virtual model and the related virtual-model knowledge are needed. This knowledge can be tacit or explicit. To capture explicit knowledge, data and information must be collected from different disciplines in the organization.
In this paper, a data map with a focus on the manufacturing engineering scope is presented. This data map is generated from different data sources at a manufacturing plant and gives an overview of the data that exist in the different sources in the area of manufacturing. Combining real-world data from different sources with virtual engineering model data supports, among other things, the establishment of virtual confidence.
Application integration requires the consideration of instance data and schema data. Instance data in one application may be schema data for another application, which gives rise to multiple instantiation levels. Using deep instantiation, an object may be deeply characterized by representing schema data about objects several instantiation levels below. Deep instantiation still demands a clear separation of instantiation levels: the source and target objects of a relationship must be at the same instantiation level. This separation is inadequate in the context of application integration. Dual deep instantiation (DDI), on the other hand, allows for relationships that connect objects at different instantiation levels. The depth of the characterization may be specified separately for each end of the relationship. In this paper, we present and implement set-theoretic predicates and axioms for the representation of conceptual models with DDI.
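A simplified sketch of the idea that, with dual deep instantiation, the two ends of a relationship may characterize objects at different instantiation levels. The level assignments, depth values, and the conformance check below are illustrative assumptions rather than the paper's set-theoretic formalization.

```python
from dataclasses import dataclass

# Hypothetical level assignments for a few example objects.
level = {"ProductCategory": 2, "Product": 1, "myPhone": 0,
         "Company": 1, "acme": 0}

@dataclass
class Rel:
    source: str
    target: str
    source_depth: int   # how many levels below 'source' the actual source lies
    target_depth: int   # how many levels below 'target' the actual target lies

def conforms(rel: Rel, src: str, tgt: str) -> bool:
    """Check that a concrete link (src, tgt) sits at the declared depths."""
    return (level[rel.source] - level[src] == rel.source_depth and
            level[rel.target] - level[tgt] == rel.target_depth)

# 'producedBy' is declared between ProductCategory (level 2) and Company (level 1),
# but characterizes links two levels below its source and one level below its target.
produced_by = Rel("ProductCategory", "Company", source_depth=2, target_depth=1)
print(conforms(produced_by, "myPhone", "acme"))   # True: ends at different levels
```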
An enterprise database contains a global, integrated, and consistent representation of a company’s data. Multi-level modeling facilitates the definition and maintenance of such an integrated conceptual data model in a dynamic environment of changing data requirements of diverse applications. Multi-level models transcend the traditional separation of class and object with clabjects as the central modeling primitive, which allows for a more flexible and natural representation of many real-world use cases. In deep instantiation, the number of instantiation levels of a clabject or property is indicated by a single potency. Dual deep modeling (DDM) differentiates between source potency and target potency of a property or association and supports the flexible instantiation and refinement of the property by statements connecting clabjects at different modeling levels. DDM comes with multiple generalization of clabjects, subsetting/specialization of properties, and multi-level cardinality constraints. Examples are presented using a UML-style notation for DDM together with UML class and object diagrams for the representation of two-level user views derived from the multi-level model. Syntax and semantics of DDM are formalized and implemented in F-Logic, supporting the modeler with integrity checks and rich query facilities.
Product lifecycle management (PLM) systems maintain, among other things, the specifications and designs of product, process and resource artefacts; they thus serve as the basis for realizing the concept of Virtual Manufacturing and play a vital role in shortening the lead times of engineering processes. The design of new products requires numerous experiments and test runs of new facilities, which delay the product release and cause high costs if performed in the real world. Virtualization promises to reduce these costs by simulating reality. However, to be useful, the results of the simulation must predict the real results; this is called virtual confidence. We propose a knowledge-base approach to capture and maintain the virtual confidence in simulation results. To do so, the provenance of the results of real, experimental and simulated processes is recorded and linked via confirmation objects.
This paper reports on the plenary panel discussion on “How to Build a Perfect Enterprise Modeling Method” held at the PoEM 2021 conference. The panel was charged with finding a pathway to the perfect enterprise modeling method. The panelists have a background in enterprise modeling and method engineering. So, the question is: Can method engineering help with designing a better enterprise modeling environment? The contributions in this paper should be regarded as a snapshot of our viewpoints at the time of the panel.
We are pleased to welcome you to the proceedings of the 41st edition of the International Conference on Conceptual Modeling (ER 2022), which took place during October 17–20, 2022. Originally, the conference was planned to take place in the beautiful city of Hyderabad, India, but due to the uncertain COVID-19 situation it was finally held virtually. The ER conference series aims to bring together researchers and practitioners building foundations of conceptual modeling and/or applying conceptual modeling in a wide range of software engineering fields. Conceptual modeling has never been more important in this age of uncertainty. As individuals, organizations, and nations face new and unexpected challenges, software and data must be developed that can cope with and help address this new uncertainty in an ever-faster changing world. Conceptual modeling can be used to describe, understand, and cope with increasing levels of uncertainty in our world. Conference topics of interest include the theories of concepts and ontologies underlying conceptual modeling, modeling languages, methods and tools for developing and communicating conceptual models, and techniques for transforming conceptual models into effective implementations.