Critical infrastructures (CIs) are becoming increasingly sophisticated, with embedded cyber-physical systems (CPSs) that provide managerial automation and autonomic controls. Yet these advances expose CI components to new cyber-threats, potentially leading to chains of dysfunctions with catastrophic socio-economic implications. We propose a comprehensive architectural model to support the development of incident management tools that provide situation awareness and cyber-threat intelligence for CI protection, with a special focus on smart-grid CIs. The goal is to leverage forensic data from CPS-based CIs to perform predictive analytics. In doing so, we use AI (Artificial Intelligence) paradigms for data collection, threat detection, and cascade-effect prediction.
Smart grid employs ICT infrastructure and network connectivity to optimize efficiency and deliver new functionalities. This evolution is associated with an increased risk of cybersecurity threats that may hamper smart grid operations. Power utility providers need tools for assessing the risk of prevailing cyberthreats over ICT infrastructures. Frameworks to guide the development of these tools are essential to define and reveal vulnerability analysis indicators. We propose a data-driven approach for designing testbeds to evaluate the vulnerability of cyberphysical systems against cyberthreats. The proposed framework uses data reported from multiple components of the cyberphysical system architecture layers, including physical, control, and cyber layers. At the physical layer, we consider component inventory and related physical flows. At the control level, we consider control data, such as SCADA data flows in industrial and critical infrastructure control systems. Finally, at the cyber layer level, we consider existing security and monitoring data from cyber-incident event management tools, which are increasingly embedded into the control fabrics of cyberphysical systems.
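As an illustration of the layered data model described above, the following Python sketch shows how records from the physical, control, and cyber layers could be represented and aggregated for vulnerability analysis. All class and field names are illustrative assumptions, not the framework's actual schema.

    # Minimal sketch of a three-layer data model for a cyber-physical testbed.
    # All class and field names are illustrative assumptions, not the ELVIRA schema.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PhysicalRecord:          # physical layer: component inventory and flows
        component_id: str
        component_type: str        # e.g. "transformer", "feeder"
        flow_mw: float             # observed physical flow

    @dataclass
    class ControlRecord:           # control layer: SCADA-style data flows
        rtu_id: str
        tag: str                   # e.g. "breaker_status"
        value: float
        timestamp: float

    @dataclass
    class CyberEvent:              # cyber layer: security/monitoring events
        source: str                # e.g. "ids", "siem"
        severity: int              # 1 (info) .. 5 (critical)
        description: str

    @dataclass
    class TestbedSnapshot:
        physical: List[PhysicalRecord] = field(default_factory=list)
        control: List[ControlRecord] = field(default_factory=list)
        cyber: List[CyberEvent] = field(default_factory=list)

        def high_severity_events(self, threshold: int = 4) -> List[CyberEvent]:
            """Simple cross-layer query used by a vulnerability indicator."""
            return [e for e in self.cyber if e.severity >= threshold]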
This document reports a technical description of ELVIRA project results obtained as part of Work-package 4.1, entitled “Multi-agent systems for power Grid monitoring”. The ELVIRA project is a collaboration between researchers at the School of IT at the University of Skövde and Combitech Technical Consulting Company in Sweden, with the aim of designing, developing and testing a testbed simulator for critical infrastructure cybersecurity. This report outlines intelligent approaches that continuously analyze data flows generated by Supervisory Control And Data Acquisition (SCADA) systems, which monitor contemporary power grid infrastructures. However, cybersecurity threats and security mechanisms cannot be analyzed and tested on actual systems, and thus testbed simulators are necessary to assess vulnerabilities and evaluate infrastructure resilience against cyberattacks. This report suggests an agent-based model to simulate the behaviour of SCADA-like cyber-components facing cyber-infection, in order to experiment with and test intelligent mitigation mechanisms.
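A minimal sketch of the kind of agent-based simulation described above is given below. The topology, infection model and mitigation rule are illustrative assumptions, not the model developed in the project.

    # Minimal agent-based sketch of SCADA-like components facing a cyber-infection.
    # Topology, infection probability and mitigation rule are illustrative assumptions.
    import random

    class ComponentAgent:
        def __init__(self, name, neighbours=None):
            self.name = name
            self.neighbours = neighbours or []   # connected components
            self.infected = False

        def step(self, p_spread=0.3):
            """If infected, try to spread the infection to neighbours."""
            if self.infected:
                for n in self.neighbours:
                    if not n.infected and random.random() < p_spread:
                        n.infected = True

    def mitigate(agents):
        """Toy mitigation: disinfect (patch) one infected agent per step."""
        for a in agents:
            if a.infected:
                a.infected = False
                return

    def run(steps=10, seed=1):
        random.seed(seed)
        rtu1, rtu2, hmi = ComponentAgent("RTU-1"), ComponentAgent("RTU-2"), ComponentAgent("HMI")
        rtu1.neighbours, rtu2.neighbours, hmi.neighbours = [hmi], [hmi], [rtu1, rtu2]
        rtu1.infected = True                     # initial compromise
        for t in range(steps):
            for a in (rtu1, rtu2, hmi):
                a.step()
            mitigate([rtu1, rtu2, hmi])
            print(t, [(a.name, a.infected) for a in (rtu1, rtu2, hmi)])

    if __name__ == "__main__":
        run()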
In the post-September 11 era, the demand for security has increased in virtually all parts of society. The need for increased security originates from the emergence of new threats which differ from traditional ones in that they cannot be easily defined and are sometimes unknown or hidden in the “noise” of daily life.
When the threats are known and definable, methods based on situation recognition can be used to find them. However, when the threats are hard or impossible to define, other approaches must be used. One such approach is data-driven anomaly detection, where a model of normalcy is built and used to find anomalies, that is, things that do not fit the normal model. Anomaly detection has been identified as one of many enabling technologies for increasing security in society.
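A minimal example of the data-driven idea: fit a simple model of normalcy to historical observations and flag observations that do not fit it. The Gaussian model and the margin below are illustrative choices, not a prescription.

    # Toy data-driven anomaly detector: a per-feature Gaussian model of normalcy.
    # The model choice (independent Gaussians) and margin are illustrative.
    import math

    def fit_normalcy(samples):
        """samples: list of equal-length feature vectors of 'normal' behaviour."""
        n, dim = len(samples), len(samples[0])
        means = [sum(s[d] for s in samples) / n for d in range(dim)]
        vars_ = [sum((s[d] - means[d]) ** 2 for s in samples) / n + 1e-9
                 for d in range(dim)]
        return means, vars_

    def anomaly_score(x, model):
        """Negative log-likelihood under the normal model; higher = more anomalous."""
        means, vars_ = model
        return sum(0.5 * math.log(2 * math.pi * v) + (xi - m) ** 2 / (2 * v)
                   for xi, m, v in zip(x, means, vars_))

    normal_data = [[10.1, 0.5], [9.8, 0.4], [10.3, 0.6], [9.9, 0.5]]
    model = fit_normalcy(normal_data)
    threshold = max(anomaly_score(s, model) for s in normal_data) + 3.0  # arbitrary margin
    print(anomaly_score([10.0, 0.5], model) > threshold)   # False: fits the normal model
    print(anomaly_score([25.0, 3.0], model) > threshold)   # True: does not fit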
In this thesis, the problem of how to detect anomalies in the surveillance domain is studied. This is done through a characterisation of the surveillance domain and a literature review that identifies a number of weaknesses in previous anomaly detection methods used in the surveillance domain. Examples of identified weaknesses include the handling of contextual information, the inclusion of expert knowledge and the handling of joint attributes. Based on the findings from this study, a new anomaly detection method is proposed. The proposed method is evaluated with respect to detection performance and computational cost on a number of datasets, recorded from real-world sensors, in different application areas of the surveillance domain. Additionally, the method is compared to two other commonly used anomaly detection methods. Finally, the method is evaluated on a dataset with anomalies developed together with maritime subject matter experts. The conclusion of the thesis is that the proposed method has a number of strengths compared to previous methods and is suitable for use in operative maritime command and control systems.
In many sensor systems used in urban environments, the amount of data produced can be vast. To aid operators of such systems, high-level information fusion can be used to automatically analyze the surveillance information. In this paper, an anomaly detection approach is evaluated for finding areas with traffic patterns that deviate from what is considered normal. The use of such approaches could help operators identify areas with an increased risk of ambushes or improvised explosive devices (IEDs).
We extend the State-Based Anomaly Detection approach by introducing precise and imprecise anomaly detectors using the Bayesian and credal combination operators, where evidence gathered over time is combined into a joint evidence. We use imprecision to represent the sensitivity of the classification of an object as normal or anomalous. We evaluate the detectors on a real-world maritime dataset containing recorded AIS data and show that they outperform previously proposed detectors based on Gaussian mixture models and kernel density estimators. We also show that the introduced anomaly detectors perform slightly better than the State-Based Anomaly Detection approach with a sliding window.
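For readers unfamiliar with the combination operators, the sketch below illustrates the general idea under simplifying assumptions: per-time-step evidence over the frame {normal, anomalous} is fused with the Bayesian operator (normalised element-wise product), and a credal set represented by its extreme points is fused by combining extreme points pairwise. This is a schematic illustration, not the paper's implementation.

    # Schematic illustration of Bayesian and credal combination of evidence
    # over the frame {normal, anomalous}. Not the paper's actual implementation.

    def bayes_combine(p, q):
        """Bayesian combination operator: normalised element-wise product."""
        prod = [pi * qi for pi, qi in zip(p, q)]
        s = sum(prod)
        return [x / s for x in prod]

    def credal_combine(extremes_p, extremes_q):
        """Combine two credal sets given by their extreme points:
        fuse every pair of extreme points with the Bayesian operator."""
        return [bayes_combine(p, q) for p in extremes_p for q in extremes_q]

    # Precise (Bayesian) case: two observations both lean towards 'anomalous'.
    e1, e2 = [0.4, 0.6], [0.3, 0.7]
    print(bayes_combine(e1, e2))        # joint evidence, approximately [0.22, 0.78]

    # Imprecise (credal) case: each observation is a set of distributions.
    c1 = [[0.35, 0.65], [0.45, 0.55]]   # extreme points of credal set 1
    c2 = [[0.25, 0.75], [0.35, 0.65]]   # extreme points of credal set 2
    joint = credal_combine(c1, c2)
    lower = min(d[1] for d in joint)    # lower probability of 'anomalous'
    upper = max(d[1] for d in joint)    # upper probability of 'anomalous'
    print(round(lower, 3), round(upper, 3))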
In recent years, the development of low-cost GPS transceivers has made it possible to equip all trucks in a fleet with equipment for automatically reporting their status to a fleet management system. The downside is that the huge amount of information gathered must be evaluated in real time by an operator. We propose the use of a data-driven anomaly detection algorithm that learns "normal" vehicle behaviour and detects anomalous behaviour such as smuggling, accidents and hijacking. The algorithm is evaluated on real-world data from trucks and commuters equipped with GPS transceivers. The results give initial support to the claim that anomaly detection based on statistical learning can be used to support human decision making. This ability can increase supply chain security by alerting an operator to anomalous vehicle behaviour.
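One possible instantiation of learning "normal" vehicle behaviour from GPS data is sketched below, using a Gaussian mixture over simple trip features. The features, model choice and threshold are illustrative assumptions and not the evaluated algorithm.

    # Illustrative statistical-learning detector for vehicle behaviour.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic stand-in for GPS-derived features: [speed_kmh, heading_change_deg]
    normal_trips = np.column_stack([rng.normal(80, 10, 500), rng.normal(0, 5, 500)])

    model = GaussianMixture(n_components=3, random_state=0).fit(normal_trips)
    threshold = np.percentile(model.score_samples(normal_trips), 1)  # 1% quantile

    new_obs = np.array([[82.0, 2.0],     # ordinary highway driving
                        [15.0, 60.0]])   # slow with sharp turns, e.g. unexpected detour
    is_anomalous = model.score_samples(new_obs) < threshold
    print(is_anomalous)                  # most likely [False  True]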
Maritime Domain Awareness (MDA) is important for both civilian and military applications. An important part of MDA is the detection of unusual vessel activities such as piracy, smuggling, poaching, and collisions. Today's interconnected sensor systems provide huge amounts of information over large geographical areas, which can push operators past their cognitive capacity and cause them to miss important events. We propose an agent-based situation management system that automatically analyses sensor information to detect unusual activity and anomalies. The system combines knowledge-based detection with data-driven anomaly detection. It is evaluated using information from both radar and AIS sensors.
The increased societal need for surveillance and the decreasing cost of sensors have led to a number of new challenges. The problem is not to collect data but to use it effectively for decision support. Manual interpretation of huge amounts of data in real time is not feasible; the operator of a surveillance system needs support to analyze and understand all incoming data. In this paper, an approach to intelligent video surveillance is presented, with emphasis on finding behavioural anomalies. Two different anomaly detection methods are compared and combined. The results show that total detection performance is best increased by combining the two anomaly detectors rather than employing them independently.
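The sketch below shows one simple way to combine the scores of two anomaly detectors: normalise each detector's scores and fuse them, so that an observation is flagged when the combined evidence is strong. The fusion rule is an illustrative choice, not the scheme evaluated in the paper.

    # Illustrative fusion of two anomaly detectors' scores (not the paper's scheme).

    def min_max_normalise(scores):
        lo, hi = min(scores), max(scores)
        return [(s - lo) / (hi - lo + 1e-9) for s in scores]

    def combine(scores_a, scores_b, weight=0.5):
        """Weighted average of normalised scores from two detectors."""
        a, b = min_max_normalise(scores_a), min_max_normalise(scores_b)
        return [weight * x + (1 - weight) * y for x, y in zip(a, b)]

    # Detector A is confident about sample 3, detector B only mildly suspicious;
    # the fused score still flags it while the other samples stay below threshold.
    det_a = [0.1, 0.2, 0.15, 0.9, 0.1]
    det_b = [0.3, 0.2, 0.25, 0.6, 0.2]
    fused = combine(det_a, det_b)
    print([round(s, 2) for s in fused])
    print([s > 0.5 for s in fused])   # only sample at index 3 is flagged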
In this paper we propose an approach for detecting anomalies in data from visual surveillance sensors. The approach includes creating a structure for representing data, building “normal models” by filling the structure with data for the situation at hand, and finally detecting deviations in the data. The approach allows detections based on the incorporation of a priori knowledge about the situation and on data-driven analysis. The main advantages of the approach compared to earlier work are its low computational requirements, the iterative updating of normal models and the high explainability of found anomalies. The proposed approach is evaluated off-line using real-world data and the results support that it could be used to detect anomalies in real-time applications.
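A minimal sketch of the kind of structure-plus-normal-model idea described above is given below: observations are binned into cells, each cell keeps incrementally updated statistics, and a new observation is reported as anomalous, with an explanation, when it deviates strongly from its cell. The cell structure, running statistics and deviation rule are assumptions, not the paper's exact method.

    # Illustrative cell-based normal model with iterative updates and
    # explainable deviations (assumed structure, not the paper's exact method).

    class CellModel:
        def __init__(self):
            self.n, self.mean, self.m2 = 0, 0.0, 0.0   # Welford running statistics

        def update(self, value):
            self.n += 1
            delta = value - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (value - self.mean)

        def std(self):
            return (self.m2 / self.n) ** 0.5 if self.n > 1 else float("inf")

    class NormalModel:
        def __init__(self, k=3.0):
            self.cells, self.k = {}, k

        def observe(self, cell_id, value):
            """Check a new observation against its cell, then update the model."""
            cell = self.cells.setdefault(cell_id, CellModel())
            anomalous = cell.n > 10 and abs(value - cell.mean) > self.k * cell.std()
            explanation = (f"value {value:.1f} deviates from cell {cell_id} "
                           f"mean {cell.mean:.1f} (beyond {self.k} std)") if anomalous else ""
            cell.update(value)
            return anomalous, explanation

    model = NormalModel()
    for v in [5.0, 5.2, 4.9, 5.1, 5.0, 5.3, 4.8, 5.1, 5.0, 5.2, 4.9, 5.1]:
        model.observe(("zoneA", "speed"), v)
    print(model.observe(("zoneA", "speed"), 12.0))   # (True, explanation)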
Component-Based Software Development is a conventional way of working for software-intensive businesses, and Open Source Software (OSS) components are frequently considered by businesses for adoption and inclusion in software products. Previous research has found a variety of practices used to support the adoption of OSS components, including formally specified processes and less formal, developer-led approaches, and that the practices used continue to develop. Evolutionary pressures identified include the proliferation of available OSS components and increases in the pace of software development as businesses move towards continuous integration and delivery. We investigate work practices used in six software-intensive businesses in the primary and secondary software sectors to understand current approaches to OSS component adoption and the challenges businesses face establishing effective work practices to evaluate OSS components. We find businesses have established processes for evaluating OSS components and communities that support more complex and nuanced considerations of the cost and risks of component adoption alongside matters such as licence compliance and functional requirements. We also found that the increasing pace and volume of software development within some businesses provides pressure to continue to evolve software evaluation processes.
Reproducible builds (R-Bs) are software engineering practices that reliably create bit-for-bit identical binary executable files from specified source code. R-Bs are applied in some open source software (OSS) projects and distributions to allow verification that the distributed binary has been built from the released source code. The use of R-Bs has been advocated in software maintenance, and R-Bs are applied in the development of some OSS security applications. Nonetheless, industry application of R-Bs appears limited, and we seek to understand whether awareness is low or if significant technical and business reasons prevent wider adoption. Through interviews with software practitioners and business managers, this study explores the utility of applying R-Bs in businesses in the primary and secondary software sectors and the business and technical reasons supporting their adoption. We find businesses use R-Bs in the safety-critical and security domains, and R-Bs are valuable for traceability and support collaborative software development. We also found that R-Bs are valued as engineering processes and are seen as a badge of software quality, but without a tangible value proposition. There are good engineering reasons to use R-Bs in industrial software development, and the principle of establishing correspondence between source code and binary offers opportunities for the development of further applications.
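To make the verification principle concrete, the following sketch checks whether a locally rebuilt binary is bit-for-bit identical to a distributed one by comparing cryptographic digests, which is the correspondence between source code and binary that R-Bs establish. The file paths are hypothetical.

    # Sketch of bit-for-bit verification between a distributed binary and a
    # local rebuild from the released source. File paths are hypothetical.
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    distributed = "dist/app-1.2.3.bin"      # binary published by the project
    rebuilt = "build/app-1.2.3.bin"         # binary rebuilt locally from source

    if sha256_of(distributed) == sha256_of(rebuilt):
        print("Reproducible: distributed binary matches the local rebuild.")
    else:
        print("Mismatch: the distributed binary cannot be verified against the source.")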
Software interoperability is commonly achieved through the implementation of standards for communication protocols or data representation formats. Standards documents are often complex, difficult to interpret, and may contain errors and inconsistencies, which can lead to differing interpretations and implementations that inhibit interoperability. Through a case study of two years of activity in the Apache PDFBox project we examine day-to-day decisions made concerning implementation of the PDF specifications and standards in a community open source software (OSS) project. Thematic analysis is used to identify semantic themes describing the context of observed decisions concerning interoperability. Fundamental decision types are identified, including emulation of the behaviour of dominant implementations and the extent to which to implement the PDF standards. Many factors influencing the decisions are related to the sustainability of the project itself, while other influences result from decisions made by external actors, including the developers of dependencies of PDFBox. This article contributes a fine-grained perspective of decision-making about software interoperability by contributors to a community OSS project. The study identifies how decisions made support the continuing technical relevance of the software, and factors that motivate and constrain project activity.
Open hardware and open source software platforms bring benefits to both implementers and users in the form of system adaptability and maintainability, and through the avoidance of lock-in, for example. Development of the RISC-V Instruction Set Architecture and processors during the last ten years has made the implementation of a desktop computer using open hardware, including open processors, and open source software increasingly feasible. We use the SiFive Unmatched development board and Ubuntu Linux, together with the recorded experiences of system builders using the Unmatched board, to explore the extent to which it is possible to create an open desktop computer. The work identifies current limitations to implementing an open computer system, which lie mainly at the interface between the operating system and hardware components. Potential solutions to the challenges uncovered are proposed, including greater consideration of openness during the early stages of product design. A further contribution is made by an account of the synergies arising from open collaboration in a private-collective innovation process.
The majority of contributions to community open source software (OSS) projects are made by practitioners acting on behalf of companies and other organisations. Previous research has addressed the motivations of both individuals and companies to engage with OSS projects. However, limited research has been undertaken that examines and explains the practical mechanisms or work practices used by companies and their developers to pursue their commercial and technical objectives when engaging with OSS projects. This research investigates the variety of work practices used in public communication channels by company contributors to engage with and contribute to eight community OSS projects. Through interviews with contributors to the eight projects we draw on their experiences and insights to explore the motivations to use particular methods of contribution. We find that companies utilise work practices for contributing to community projects that are congruent with their circumstances and capabilities and that support their short- and long-term needs. We also find that companies contribute to community OSS projects in ways that may not always be apparent from public sources, such as employing core project developers, making donations, and joining project steering committees in order to advance strategic interests. The factors influencing contributor work practices can be complex and are often dynamic, arising from considerations such as company and project structure, as well as technical concerns and commercial strategies. The business context in which software created by the OSS project is deployed is also found to influence contributor work practices.
The maritime industry is experiencing one of its longest and fastest periods of growth. Hence, the global maritime surveillance capacity needs to grow as well. The detection of vessel activity is an important objective of the civil security domain. Detecting vessel activity may become problematic if audit data is uncertain. This paper investigates whether Bayesian networks built from expert knowledge can detect activities with a signature-based detection approach. For this, a maritime pilot-boat scenario has been identified with a domain expert. Each of the scenario’s activities has been divided into signatures, where each signature relates to a specific Bayesian network information node. The signatures were implemented to find evidence for the Bayesian network information nodes. AIS data containing real-world observations was used for testing, showing that it is possible to detect the maritime pilot-boat scenario with the proposed approach.
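As a schematic illustration of the approach, a small Bayesian network can relate an activity node to signature-based information nodes and compute the posterior probability of the activity given the signatures detected in the AIS data. The network structure, priors and conditional probabilities below are invented for illustration, not the expert-elicited values.

    # Schematic Bayesian-network-style detector: an activity node with
    # signature-based information nodes as children. All probabilities are
    # invented for illustration, not the expert-elicited values.

    P_ACTIVITY = 0.05                       # prior for the "pilot-boat scenario"
    # P(signature observed | activity), P(signature observed | no activity)
    CPT = {
        "approaches_fairway":   (0.90, 0.20),
        "meets_large_vessel":   (0.85, 0.05),
        "returns_to_harbour":   (0.80, 0.30),
    }

    def posterior(evidence):
        """Exact posterior P(activity | evidence) for this two-level network.
        evidence: dict mapping signature name -> True/False (observed or not)."""
        p_act, p_not = P_ACTIVITY, 1.0 - P_ACTIVITY
        for sig, observed in evidence.items():
            p_true, p_false = CPT[sig]
            p_act *= p_true if observed else 1.0 - p_true
            p_not *= p_false if observed else 1.0 - p_false
        return p_act / (p_act + p_not)

    # Signatures extracted from AIS tracks (illustrative outcome of signature matching).
    print(posterior({"approaches_fairway": True,
                     "meets_large_vessel": True,
                     "returns_to_harbour": True}))    # high posterior (about 0.91)
    print(posterior({"approaches_fairway": True,
                     "meets_large_vessel": False,
                     "returns_to_harbour": False}))   # low posterior (about 0.01)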
Software reference implementations of ICT standards have an important role for verifying that a standard is implementable, supporting interoperability testing among other implementations, and providing feedback to the standard development process. Providing reference implementations and widely used implementations of a standard as Open Source Software also promotes wide deployment in software systems, avoidance of different lock-in effects, interoperability, and longevity of systems and associated digital assets. In this paper results are reported on the availability of reference implementations and widely deployed implementations provided as Open Source Software for standards issued by different standards setting organisations. Specifically, findings draw from observations and analyses related to software implementations for identified standards issued by ETSI, IEC, IEEE, IETF, ISO, ITU-T, OASIS, and W3C.
Software reference implementations of ICT standards have an important role for verifying that a standard is implementable, supporting interoperability testing among other implementations, and providing feedback to the standard development process. Providing reference implementations and widely used implementations of a standard as Open Source Software promotes wide deployment in software systems, interoperability, longevity of systems and associated digital assets, and avoidance of different lock-in effects. In this paper results are reported on the availability of, and perceptions and practices concerning, reference implementations and widely deployed implementations provided as Open Source Software for standards issued by different standards setting organisations. Specifically, findings draw from observations and analyses related to software implementations for identified standards and policy statements, issued by ETSI, IEC, IEEE, IETF, ISO, ITU-T, OASIS, and W3C.
Web analytics technologies provide opportunities for organisations to obtain information about users visiting their websites in order to understand and optimise web usage. Use of such technologies often leads to issues related to data privacy and potential lock-in to specific suppliers and proprietary technologies. Use of open source software (OSS) for web analytics can create conditions for avoiding issues related to data privacy and lock-in, and thereby provides opportunities for a long-term sustainable solution for organisations both in the public and private sectors. This paper characterises use of and engagement with OSS projects for web analytics. Specifically, we contribute a characterisation of use of OSS licensed web analytics technologies in Swedish government authorities, and a characterisation of organisational engagement with the Matomo OSS project for web analytics.
Real-time communication (RTC) technologies for the web provide opportunities for individuals and organisations to work and collaborate remotely, and the need for such technologies has recently increased. Use of RTC technologies and tools for the web involves a number of challenges concerning data privacy and lock-in effects, such as dependency to specific suppliers and proprietary technologies. Use of open standards for RTC and open source software (OSS) implementing such standards can create conditions for avoiding issues related to data privacy and lock-in, and thereby provides opportunities for long-term sustainable solutions. The paper characterises how engagement with standardisation of WebRTC in the context of IETF and W3C is related to engagement with the WebRTC OSS project.
In the effort to provide simulation support to the future Network Based Defence (NBD) currently being applied by the Swedish Armed Forces (SwAF), the authors' opinion is that simulation should be treated like any other service and use the same architectural requirements addressed in the SwAF Enterprise Architecture (FMA) and in subsidiary documents.
The choice so far for simulation is the High Level Architecture (HLA). During the authors' participation in ongoing work supporting NBD, questions have gradually been raised as to whether HLA is the simulation path to walk. In the Core Enterprise Services (CES) and FMA Services IT-Kernel, core services are specified, and HLA addresses a lot of non-simulation-specific services, giving unwanted redundancy. However, the services already defined may, with some enhancements, deliver the same services addressed within CES and the FMA Services IT-Kernel. Furthermore, HLA also comes with the Federation Development and Execution Process (FEDEP), which introduces a process methodology for building HLA federations. Basically, FEDEP is a software development process for distributed systems. The Next Generation HLA could be more than just a simulation standard if it utilizes the FMA ideas and avoids the green HLA elephant.
In this paper the authors present the ongoing work, as it stands today, with Service Oriented Simulations, that is, an outlook for simulation using the architectural structuring, services, components and infrastructure concepts evolving in FMA, with the Global Information Grid (GIG) Enterprise Services (GES) in mind. The focus is to identify simulation services that encapsulate the core features of simulation, thereby reducing redundancy in methodology and services and enabling interoperable simulation support for the whole system lifecycle (acquisition, development, training, planning, in-the-field decision support, and system removal) within NBD. This entails that the architecture for simulation is uniform regardless of its application, giving end-users the capability to focus on what to simulate instead of how to simulate.
This document reports a technical description of ELVIRA project results obtained as part of Work-packages 3.1 & 3.2, entitled “Taxonomy of Critical Infrastructure (Taxonomy of events + Taxonomy of CI component and relationship)”. The ELVIRA project is a collaboration between researchers at the School of IT at the University of Skövde and Combitech Technical Consulting Company in Sweden, with the aim of designing, developing and testing a testbed simulator for critical infrastructure cybersecurity.
This document reports a technical description of ELVIRA project results obtained as part of Work-package 2.1, entitled “Complex Dependencies Analysis”. In this technical report, we review recent research in which connections are regarded as influencing factors for IT systems monitoring critical infrastructure, based on which potential dependencies and resulting disturbances are identified and categorized. Each kind of dependency is discussed based on our own entity-based model. Among these dependencies, logical and functional connections are analysed in more detail with respect to modelling and simulation techniques.
Power grids form the central critical infrastructure in all developed economies. Disruptions of power supply can cause major effects on the economy and the livelihood of citizens. At the same time, power grids are being targeted by sophisticated cyber attacks. To counter these threats, we propose a domain-specific language and a repository to represent power grids and related IT components that control the power grid. We apply our tool to a standard example used in the literature to assess its expressiveness.
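To give a flavour of what such a representation can look like (illustrative only; the project's actual DSL syntax and element names are not reproduced here), a power grid and its controlling IT components can be modelled as typed nodes with "controls" and "feeds" relations, which a tool can then query, for instance to find which grid elements are exposed through a compromised control server.

    # Illustrative node/relation model of a power grid and its controlling IT
    # components (not the actual DSL of the tool described above).

    grid = {
        # node_id: (type, attributes)
        "sub1":  ("Substation",    {"voltage_kv": 130}),
        "tr1":   ("Transformer",   {"rating_mva": 40}),
        "rtu1":  ("RTU",           {"firmware": "v2.1"}),
        "scada": ("ControlServer", {"os": "linux"}),
    }
    relations = [
        ("sub1", "feeds", "tr1"),       # physical power flow
        ("rtu1", "controls", "sub1"),   # control relation
        ("scada", "controls", "rtu1"),  # IT hierarchy
    ]

    def reachable_grid_elements(start, relations):
        """Elements reachable via 'controls' edges from a compromised IT node."""
        frontier, seen = [start], set()
        while frontier:
            node = frontier.pop()
            for src, rel, dst in relations:
                if src == node and rel == "controls" and dst not in seen:
                    seen.add(dst)
                    frontier.append(dst)
        return seen

    print(reachable_grid_elements("scada", relations))   # {'rtu1', 'sub1'}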
Open source software (OSS) and open standards have become increasingly important for addressing challenges related to lock-in, interoperability and long-term maintenance of systems and associated digital assets. OSS projects operate under different conditions and many projects and organisations consider successful governance and strategic involvement with projects to constitute major challenges. Today, many companies seek to establish work practices which facilitate strategic engagement with OSS projects. Based on findings from collaborative research which draws from rich insights and extensive experiences from practice, the paper presents seven actionable strategies for organisations that seek to leverage long-term involvement with OSS projects.
Technological progress poses unique challenges for the public sector. New technology should be adopted, but it must always be done within the framework of good administration. It follows that laws governing public administration must be continuously adapted. Sweden recently amended its secrecy legislation to facilitate the use of third-party cloud solutions by public authorities. When the amendment was enacted, most public sector organisations had already been using external cloud solutions for a long time. Today, there is as much pressure on authorities to implement AI technology as there ever was to move administration into the cloud. This paper uses traditional legal methodology to investigate whether the Swedish secrecy legislation adequately enables the use of cloud-based GenAI solutions. Findings indicate that the recent amendment is likely insufficient and that there are significant practical hurdles for the application of the law, particularly with services from global cloud providers. The paper contributes to the understanding of Swedish law, and of the difficulties that can occur anywhere when policy makers and legislators do not move in tandem.
The use of technology to assist human decision making has been around for quite some time now. In the literature, models of both technological and human aspects of this support can be identified. However, we argue that there is a need for a unified model which synthesizes and extends existing models. In this paper, we give two perspectives on situation analysis: a technological perspective and a human perspective. These two perspectives are merged into a unified situation analysis model for semi-automatic, automatic and manual decision support (SAM)2. The unified model can be applied to decision support systems with any degree of automation. Moreover, an extension of the proposed model is developed which can be used for discussing important concepts such as common operational picture and common situation awareness.
The importance of standards, and especially ICT standards, in the IoT domain is widely recognised. Implementations of standard specifications provided as Open Source Software (OSS) promote interoperability and longevity of systems and create conditions for avoiding lock-in, and industrial involvement is important since it can affect community dynamics and the will to contribute. The overarching goal of this study is to characterize industrial leadership and involvement in the LwM2M (Lightweight Machine to Machine) ecosystem. Specifically, the main focus of the study is on involvement with OSS projects implementing LwM2M elements by individuals who cooperate in projects that implement or use the LwM2M standard. This can be done by analyzing the commits to the git repository, the bugs reported and commented on in the bug-tracking system, and the pull requests performed in the project. Techniques will be applied to merge authors using different identities (i.e., different e-mail addresses). By identifying the affiliation of those individuals we plan to analyze the involvement of companies in those projects, and thus how they are present in an IoT standard, in this case LwM2M.
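The sketch below shows one way such an analysis could start (commands and heuristics are illustrative; the study's actual tooling is not described here): extract author identities from the git history, merge identities that share a normalised name, and map e-mail domains to candidate company affiliations.

    # Illustrative mining of committer identities and affiliations from a git
    # repository. The merging heuristic and domain mapping are assumptions.
    import subprocess
    from collections import defaultdict

    def git_authors(repo_path):
        """Return (name, email) pairs for every commit in the repository."""
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "--format=%an|%ae"],
            capture_output=True, text=True, check=True).stdout
        return [tuple(line.split("|", 1)) for line in out.splitlines() if "|" in line]

    def merge_identities(authors):
        """Merge identities that share a normalised name (simple heuristic)."""
        merged = defaultdict(set)
        for name, email in authors:
            merged[name.strip().lower()].add(email.lower())
        return merged

    def affiliations(merged):
        """Guess affiliation from e-mail domains (personal domains left unresolved)."""
        personal = {"gmail.com", "outlook.com", "users.noreply.github.com"}
        result = {}
        for name, emails in merged.items():
            domains = {e.split("@")[-1] for e in emails} - personal
            result[name] = sorted(domains) or ["unknown"]
        return result

    if __name__ == "__main__":
        authors = git_authors(".")   # any local clone of a project implementing LwM2M
        print(affiliations(merge_identities(authors)))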
This work focuses on situation prediction in data fusion systems. A hypothesis evaluation algorithm based on artificial neural networks (ANNs) is introduced. It is evaluated and compared to a commonly used algorithm based on Bayesian networks, as well as to a simple "dummy" algorithm. For the tests, a computer-based model of the environment, including protected objects and enemy objects, is implemented. The model handles the navigation of the enemy objects, and situational data is extracted from the environment and provided to the hypothesis evaluation algorithms. It was the belief of the author that ANNs would be suitable for hypothesis evaluation if a suitable data representation of the environment were used. The representation requirements include preprocessing of the situational data to eliminate the need for variable input size to the algorithm, because ANNs handle this poorly: the whole network has to be retrained each time the amount of input data changes. The results show that the ANN-based algorithm performed best of the three and hence seems suitable for hypothesis evaluation.
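The fixed-input-size requirement mentioned above can be illustrated as follows (the feature encoding, network size and data are invented for illustration): the situational data is preprocessed into a fixed-length vector, for example by keeping only the K nearest enemy objects and padding when fewer are present, after which a standard feed-forward network can score a hypothesis.

    # Illustration of a fixed-size encoding of situational data followed by a
    # small feed-forward network. The encoding and data are invented examples.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    K = 3   # keep the K nearest enemy objects; pad with zeros if fewer exist

    def encode(enemies):
        """enemies: list of (distance, bearing, speed) tuples, variable length.
        Returns a fixed-length vector of size 3*K (nearest-first, zero-padded)."""
        nearest = sorted(enemies)[:K]
        flat = [v for e in nearest for v in e]
        return np.array(flat + [0.0] * (3 * K - len(flat)))

    rng = np.random.default_rng(0)
    # Synthetic training data: label 1 means "hypothesis: attack on protected object".
    X, y = [], []
    for _ in range(300):
        n_enemies = rng.integers(1, 5)
        enemies = [(rng.uniform(1, 50), rng.uniform(0, 360), rng.uniform(5, 40))
                   for _ in range(n_enemies)]
        X.append(encode(enemies))
        y.append(1 if min(d for d, _, _ in enemies) < 10 else 0)   # toy labelling rule

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                        random_state=0).fit(np.array(X), y)
    print(clf.predict([encode([(4.0, 90.0, 30.0)]),          # close enemy -> likely 1
                       encode([(45.0, 10.0, 20.0)])]))       # distant enemy -> likely 0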