Technical debt refers to various weaknesses in the design or implementation of a system resulting from trade-offs made during software development, usually for a quick release. Accumulating such debt over time without reducing it can seriously hamper the reusability and maintainability of the software. The aim of this study is to understand the state of technical debt in the development of self-driving miniature cars, so that proper actions can be planned to reduce the debt and obtain more reusable and maintainable software. A case study on a selected feature from two self-driving miniature car development projects is performed to assess the technical debt. Additionally, an interview study involving the developers is conducted to relate the findings of the case study to possible root causes. The results of the study indicate that "the lack of knowledge" is not the primary reason for the accumulation of technical debt from the selected code smells. The root causes lie rather in factors like time pressure, followed by issues related to software/hardware integration and incomplete refactoring, as well as reuse of legacy, third-party, or open source code.
In this paper, we define hesitant fuzzy partitions (H-fuzzy partitions) to represent the results of the standard fuzzy clustering family (e.g., fuzzy c-means and intuitionistic fuzzy c-means). We define a method to construct H-fuzzy partitions from a set of fuzzy clusters obtained from several executions of fuzzy clustering algorithms with various initializations of their parameters. Our purpose is to consider several locally optimal solutions in the search for a globally optimal one, while also letting the user consider various reliable membership values and cluster centers to evaluate his or her problem using different cluster validity indices.
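The construction described above, running a fuzzy clustering algorithm several times and collecting the resulting memberships, can be sketched as follows. This is a minimal illustration, not the paper's exact definition: the plain fuzzy c-means implementation, the crude label alignment by sorting centers, and the min/max interval aggregation are all assumptions.

```python
# Sketch: aggregating several fuzzy c-means (FCM) runs with random
# initialisations into per-point membership intervals, in the spirit of
# a hesitant fuzzy partition. Illustrative only.
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means; returns (memberships U [n x c], centers V)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]  # fuzzily weighted centers
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))        # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return U, V

def hesitant_partition(X, c, runs=5):
    """Combine several local optima into membership intervals [lo, hi]."""
    mats = []
    for s in range(runs):
        U, V = fcm(X, c, seed=s)
        order = np.argsort(V[:, 0])   # crude label alignment across runs
        mats.append(U[:, order])
    stack = np.stack(mats)            # shape: runs x n x c
    return stack.min(axis=0), stack.max(axis=0)
```

Wide intervals for a point then signal disagreement between local optima, which is the kind of hesitancy the user can inspect with different validity indices.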
In recent years, control systems in the automation industries have become more and more useful, covering a wide range of fields, for example, industrial instrumentation, control, and monitoring systems. Vision systems are used nowadays to improve product quality control, saving costs and time and obtaining better accuracy than a human operator in companies' manufacturing processes. Combining a vision system with a suitable automated system allows companies to cover a wide range of products and achieve rapid production. All these factors are considered in this project.
The aim of this project is to upgrade the functionality of a Nokia-Cell, which was used in a quality control process for the back shells of Nokia cell phones. The project includes the design, upgrade, and implementation of a new system in order to make the cell work properly. The Nokia-Cell is composed of the following basic modules: a vision and image recognition system, automation system devices (PC, PLC, and robot), and other mechatronic devices. The new system will include a new camera, due to the poor connectivity and quality of the old camera. For the same reason, a new PC will replace two older ones for communication and vision recognition. The new system will also include a new Beckhoff PLC to replace the aging Omron one, so as to facilitate the connections using the same language. In addition, the IEC 61499 Function Block standard is adopted for programming the Nokia-Cell.
It is expected that the results of this project will contribute to both research and education in the future. In addition, the results could appropriately be applied in industry to vision-based quality control systems.
The aim of this paper is to provide a thinking road-map and a practical guide to researchers and practitioners working on hierarchical forecasting problems. Evaluating the performance of hierarchical forecasts comes with new challenges stemming from both the structure of the hierarchy and the application context. We discuss several relevant dimensions for researchers and analysts: the scale and units of the time series, the issue of intermittency, the forecast horizon, the importance of multiple evaluation windows and the multiple objective decision context. We conclude with a series of practical recommendations.
Industrial automated systems are mostly designed and pre-adjusted to always work at their maximum production rate. This leaves room for important reductions in energy consumption, considering the production rate variations of factories in reality. This article presents a multi-objective optimization application targeting the cycle time and energy consumption of a robotic cell. A novel approach is presented in which an existing emulation model of a fictitious robotic cell was extended with low-level electrical components modeled and encapsulated as FMUs. The model, commanded by PLC and robot control software, was subjected to a multi-objective optimization algorithm in order to find the Pareto front between energy consumption and production rate. The result of the optimization process allows selecting the most efficient energy consumption for the robotic cell in order to achieve the required cycle time.
Discrete Event Simulation is a comprehensive tool for the analysis and design of manufacturing systems. Over the years, considerable efforts to improve simulation processes have been made. One step in these efforts is the standardisation of the output data through the development of an appropriate system which presents the results in a standardised way. This paper presents the results of a survey based on simulation projects undertaken in the automotive industry. In addition, it presents the implementation of an automated output data-handling system which aims to simplify the project’s documentation task for the simulation engineers and make the results more accessible for other stakeholders.
An important factor for success in project-based learning (PBL) is that the involved project groups establish an atmosphere of social interaction in their working environment. In PBL-scenarios situated in distributed environments, most of a group's work-processes are mediated through the use of production-focused tools that are unconcerned with the important informal and social aspects of a project. On the other hand, there are plenty of tools and platforms that focus on doing the opposite and mainly support informal bonding (e.g., Facebook), but these types of environments can be obtrusive and contain distractions that can be detrimental to a group's productivity and are thus often excluded from working environments. The aim of this paper is to examine how a game-based multi-user environment (MUVE) can be designed to support project-based learning by bridging the gap between productivity-focused and social software. To explore this, the authors developed a game-based MUVE which was evaluated in a PBL-scenario. The result of the study revealed several crucial design elements that are needed to make such a MUVE work effectively, and that the acceptance towards game-based MUVEs is high, even with a rudimentary execution.
This thesis analyses whether changing the auditory feedback can improve the effectiveness of performance when interacting with a non-visual system, or with a system used by individuals with visual impairment. Two prototypes were developed, one with binaural audio and the other with stereo audio. The interaction was evaluated in an experiment in which 22 participants, divided into two groups, performed a number of interaction tasks. Post-interviews were conducted together with the experiment. The results of the experiment showed no great difference between binaural and stereo audio regarding the speed and accuracy of the interaction. The post-interviews revealed interesting differences in the way participants visualized the virtual environment that affected the interaction. This opened up interesting questions for future studies.
In this study, the future inability to meet electricity demand and the need to change consumption behavior are considered. In a smart-grid context there are several possible ways to do this. Means include increasing consumers' awareness, adding energy storage, or building smarter homes that can control appliances. To implement these, indications of what future consumption will look like could be useful. We therefore look further into how a framework for short-term consumption prediction can be created using electricity consumption data in relation to external factors. To do this, a literature study is made to see what kinds of methods are relevant and which qualities are interesting to look at in order to choose a good prediction method. Case-Based Reasoning (CBR) appeared to be a suitable method. This method was examined further and built using relational databases. The method was then tested and evaluated using datasets and the evaluation methods CV, MBE, and MAPE, which have previously been used in the domain of consumption prediction. The result was compared to the results of the winning methods in the ASHRAE competition. The CBR method performed worse than expected, and not as well as the winning methods from the ASHRAE competition. The results showed that the CBR method can be used as a predictor and has the potential to make good energy consumption predictions, and there is room for improvement in future studies.
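The three evaluation metrics named above can be computed as follows. These are the forms commonly used in building-energy prediction (e.g., in connection with the ASHRAE competition); the exact normalisations used in the thesis may differ.

```python
# Sketch of the CV(RMSE), MBE, and MAPE error metrics in the forms
# common for energy-consumption prediction; normalisations are assumed.
import numpy as np

def mape(y, yhat):
    """Mean Absolute Percentage Error, in percent."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.mean(np.abs((y - yhat) / y))

def mbe(y, yhat):
    """Mean Bias Error, in percent of the mean observation."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.sum(y - yhat) / (len(y) * y.mean())

def cv_rmse(y, yhat):
    """Coefficient of Variation of the RMSE, in percent."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return 100.0 * np.sqrt(np.mean((y - yhat) ** 2)) / y.mean()
```

MBE captures systematic over- or under-prediction (errors of opposite sign cancel), while CV(RMSE) and MAPE penalize all deviations, which is why the three are typically reported together.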
Component-Based Software Development is a conventional way of working for software-intensive businesses, and Open Source Software (OSS) components are frequently considered by businesses for adoption and inclusion in software products. Previous research has found a variety of practices used to support the adoption of OSS components, including formally specified processes and less formal, developer-led approaches, and that the practices used continue to develop. Evolutionary pressures identified include the proliferation of available OSS components and increases in the pace of software development as businesses move towards continuous integration and delivery. We investigate work practices used in six software-intensive businesses in the primary and secondary software sectors to understand current approaches to OSS component adoption and the challenges businesses face establishing effective work practices to evaluate OSS components. We find businesses have established processes for evaluating OSS components and communities that support more complex and nuanced considerations of the cost and risks of component adoption alongside matters such as licence compliance and functional requirements. We also found that the increasing pace and volume of software development within some businesses provides pressure to continue to evolve software evaluation processes.
Recently, a huge number of social networks have been made publicly available. In parallel, several definitions and methods have been proposed to protect users' privacy when these data are publicly released. Some of them were adapted from relational-dataset anonymization techniques, which are more mature than network anonymization techniques. In this paper we summarize privacy-preserving techniques, focusing on graph-modification methods, which alter the graph's structure and release the entire anonymous network. These methods allow researchers and third parties to apply all graph-mining processes to anonymous data, from local to global knowledge extraction.
This book constitutes the refereed proceedings of the 12th IFIP WG 2.13 International Conference on Open Source Systems, OSS 2016, held in Gothenburg, Sweden, in May/June 2016. The 13 revised full papers presented were carefully reviewed and selected from 38 submissions. The papers cover a wide range of topics related to free, libre, and open source software, including: organizational aspects of communities; organizational adoption; participation of women; software maintenance and evolution; open standards and open data; collaboration; hybrid communities; code reviews; and certification.
Aircraft Combat Survivability in military air operations is concerned with the survival of one's own aircraft. This entails analysis of information, detection and estimation of threats, and the implementation of actions to counteract detected threats. Beyond-visual-range weapons can today be fired from one hundred kilometers away, making them difficult to detect and track. One approach for providing early warnings of such threats is to analyze the kinematic behavior of enemy aircraft in order to detect situations that may point to malicious intent. In this paper we investigate the use of dynamic Bayesian networks for detecting hostile aircraft behaviors.
Operators will likely remain an integral part of industrial assembly for the foreseeable future. This is partly because ever-shorter product life cycles and increased product variety make production harder to automate. While technological advances enable more digitalization, the demand for individually designed products is growing. These changes, combined with global competition, put increasing pressure on operators to handle large amounts of information within a short time frame. Augmented reality (AR) has been identified as a technology that can effectively present assembly instructions to operators. AR smart glasses (ARSG) are an implementation of AR well suited to operators, since they are worn hands-free and can present individual instructions in the right context, directly in the operators' real working environment. Some industrial companies have already started using ARSG in production, and many predict that the use of ARSG will continue to grow. However, to fully integrate ARSG as one tool among many in a modern, complex factory, a company must consider several perspectives. This thesis examines both the operator perspective and the preparation (process-planning) perspective to support industrial investment decisions regarding ARSG.
The goal of this licentiate thesis is to contribute a foundation for a framework that can enable industry to select, integrate, and maintain ARSG in production as value-adding operator support. This is accomplished by examining the theoretical basis of ARSG-related technology and its maturity, as well as operators' needs regarding ARSG when used in assembly. The philosophical paradigm followed is pragmatism. The methodology used is design science, linked to the mixed-methods research paradigm. Data have been collected through demonstrator experiments, interviews, observations, and literature studies. The thesis provides a partial answer to the overall research goal.
The thesis shows that the topic is feasible, relevant to industry, and an original scientific contribution. Observations, interviews, and a literature study provided an overview of the operator perspective. Highlights from the results include that operators are willing to work with ARSG, that operators need help unlearning old tasks as well as learning new ones, and that the optimal weight distribution of ARSG depends on the operators' head positioning. Preliminary results from the preparation perspective include a general lack of AR standards for vertical industrial applications, a need for improved authoring tools that support faster instruction generation, and large variations in the specifications of available ARSG.
Future work includes a complete answer to the preparation perspective and combining all the results to create a framework for ARSG integration in industry.
The study aims to investigate how a LiveCD can support a more secure computing environment for teleworkers in a simple way. Teleworkers work outside the workplace's more secure computing environment, which makes them a more exposed group of computer users. Moreover, many of today's computer users find security related to computers and networks complicated. The implemented solution therefore provides the ability, using a LiveCD and a USB memory stick, to access both a secure remote connection and a remote desktop simply by entering a password. The services are thus configured automatically in the background; the user does not need to keep track of more than one password. A LiveCD is also a read-only medium, which means malicious code has difficulty gaining a foothold in the system. The result is that teleworkers with different needs and levels of knowledge about computer and network security are offered the possibility of a more secure computing environment in a simple way.
To better optimize and control the renewable energy system and its integration with traditional grid systems and other energy systems, corresponding technologies are needed to meet its growing practical application requirements: decentralized management and control, support for decentralized decision-making, fine-grained and timely data sharing, preservation of data and business privacy, fast and low-cost electricity market transactions, security and reliability of system operation data, and prevention of malicious cyberattacks. Blockchain is based on core technologies such as distributed ledgers, asymmetric encryption, consensus mechanisms, and smart contracts, and has some excellent features such as decentralization, openness, independence, security, and anonymity. These characteristics seem to partially meet the technical requirements of future renewable energy systems. This chapter will systematically review how blockchain technology can potentially solve these challenges with decentralized solutions for future renewable energy systems and provide a guideline for implementing corresponding blockchain-based applications for future renewable energy.
By using a virtual copy of a production cell, programming and function tests of various panels can be carried out at an early stage. A virtual copy also contributes to easier troubleshooting and reduced commissioning costs. The idea of the project is to investigate to what extent the emulation model can replace the real cell in a function test for the supplier. It also investigates to what extent real CAD drawings can be used, and what requirements are placed on the drawings to facilitate emulation. The project faced several challenges, one of which arose during the project: the safety systems could not be emulated. This was solved by bypassing all safety circuits in the PLC program. An important part of emulation is communication between the different software components in the system. In the project, it proved advantageous to split the programs in the emulation system in order to distribute the resources across three computers. Using an emulation model instead of a real production cell is still at the research stage, but through the project many areas of use have been identified, and this could change commissioning in the future.
This paper presents a study focused on comparing the driving behavior of expert and novice drivers in a mid-range driving simulator, with the intention of evaluating the validity of driving simulators for driver training. For the investigation, measurements of performance, psychophysiological measurements, and self-reported user experience under different conditions of driving tracks and driving sessions were analyzed. We calculated correlations between quantitative and qualitative measures to enhance the reliability of the findings. The experiment was conducted involving 14 experienced drivers and 17 novice drivers. The results indicate that the driving behaviors of expert and novice drivers differ from each other in several ways, but this heavily depends on the characteristics of the task. Moreover, our belief is that the analytical framework proposed in this paper can be used as a tool for selecting appropriate driving tasks as well as for evaluating driving performance in driving simulators.
The artificial intelligence in games often uses rule-based techniques for its behavior. This has made the artificial agents predictable, which is very evident in sports games. This work has evaluated whether the learning technique Q-learning is better at playing football than a rule-based technique, the state machine. To evaluate this, a simplified football simulation was created in which each team used one of the two techniques. The two teams then played 100 matches against each other to see which team/technique is best. Statistics from the matches were used as study results. The results show that Q-learning is the better technique, as it wins the most matches and creates the most chances during the matches. The subsequent discussion concerns how useful Q-learning is in a game context.
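The Q-learning technique compared against the state machine above can be sketched as a standard tabular update. The action set and state encoding below are hypothetical placeholders, not taken from the football simulation.

```python
# Minimal tabular Q-learning sketch with an epsilon-greedy policy.
# ACTIONS and the state representation are illustrative assumptions.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = ["pass", "shoot", "dribble", "defend"]   # assumed action set
Q = defaultdict(float)                             # Q[(state, action)] -> value

def choose_action(state):
    """Epsilon-greedy: mostly exploit the Q-table, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Q-learning backup: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

Unlike a hand-written state machine, the behavior here emerges from rewards (e.g., goals or created chances) accumulated over the played matches, which is what makes the agent less predictable.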
Many applications need to detect and respond to occurring events and combine these event occurrences into new events with a higher level of abstraction. Specifying how events can be combined is often supported by design tools specific to the current event processing engine. However, ensuring that the combinations of events provide the system with the correct combination of information is often left to the developer to analyze. We argue that analyzing the correctness of event composition is a complex task that needs tool support. In this paper we present a novel development tool for specifying the composition of events with time constraints. One key feature of our tool is that it automatically transforms composite events for real-time systems into a timed automaton representation. The timed automaton representation allows us to check for design errors, for example, whether the outcome of combining events with different operators under different consumption policies is consistent with the requirement specification.
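As a minimal illustration of what "composition of events with time constraints" means, the sketch below pairs occurrences of two primitive events under a sequence-within-deadline operator. The event model and operator are illustrative assumptions; the tool in the paper compiles such specifications into timed automata for analysis, which this sketch does not attempt.

```python
# Illustrative composite event: "A followed by B within d time units".
# The Event model and the operator are hypothetical, for exposition only.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    time: float   # occurrence timestamp

def sequence_within(a_events, b_events, d):
    """Return (A, B) pairs where B occurs after A and within d time units."""
    out = []
    for a in a_events:
        for b in b_events:
            if a.time < b.time <= a.time + d:
                out.append((a, b))
    return out
```

Even in this toy form, one can see why consumption policies matter: a single B occurrence may pair with several earlier A occurrences, and which pairings a system should report is exactly the kind of question the timed-automaton analysis is meant to settle.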
In today's society, computers are a natural part of our everyday lives. For most people, the computer is considered a tool that helps them at work as well as in everyday life. However, there is a darker side, where people use the computer to commit crimes. So-called IT-related crime keeps increasing, and according to Brå's report from 2016, offenses with IT elements increased by 949% in Sweden between 2006 and 2015 according to the official crime statistics (Andersson, Hedqvist, Ring & Skarp, 2016).
To catch the perpetrators, means are required to prove that a crime has been committed. One way to do this is to go into the computer to look for evidence. But what happens if the suspected perpetrator knows that he or she may be examined? The perpetrator may then try to make it as difficult as possible to get into the computer. This can be done by encrypting the system, using a so-called encryption algorithm to lock the hard drive. Such encryption can be very difficult to decrypt, and it may be easier to try to obtain the correct password instead.
The purpose of this study is to develop a model for password classification. Through this model, the strategies users employ when creating their passwords can be identified and classified. This contributes to increased knowledge of the strategies users follow when creating passwords. As full-disk encryption is becoming a more common method of preventing unauthorized access to a system, the hope is that the model can be used and developed into a framework that facilitates the work of forensic examiners at the police and other law enforcement authorities. With this model, the various strategies that different types of users employ when creating passwords may be of such a character that they can be classified into categories of their own. If such a classification can be made, it would facilitate the work of IT forensic examiners and speed up the process of cracking passwords.
The study is carried out using a qualitative method together with validation of the model. Through qualitative interviews, information is collected, analyzed, and used to develop a model for password classification. The work of developing the model consisted of an iterative process with feedback from the interviewees. A draft model, grounded in existing research, was created. The draft was then discussed with the interviewees who participated in the study, in an iterative process in which the model was updated and fed back to them. Validation of the model was carried out by capturing real passwords leaked on the Internet and then testing these passwords against the password classification model.
The HUD is what allows players to interact with the game world, and therefore its visual representation is of importance to usability. Usability is broken down into three components: effectiveness, efficiency, and satisfaction. To study the subject, a third-person action game was made for the purpose. The game contained two different HUD versions to test different approaches to UI design. The results of the study were, in relation to usability, inconclusive due to a lack of participants and varying degrees of experience within the pool of participants. Preferences were gathered, however, and preferences towards the stylized HUD were shown. Further study is promising, as other genres could more easily adapt theories from other software fields.
This memo specifies Network Time Security (NTS), a mechanism for using Transport Layer Security (TLS) and Authenticated Encryption with Associated Data (AEAD) to provide cryptographic security for the client-server mode of the Network Time Protocol (NTP).
NTS is structured as a suite of two loosely coupled sub-protocols. The first (NTS Key Establishment (NTS-KE)) handles initial authentication and key establishment over TLS. The second (NTS Extension Fields for NTPv4) handles encryption and authentication during NTP time synchronization via extension fields in the NTP packets, and holds all required state only on the client via opaque cookies.
Graphical applications on the web have become increasingly common since the World Wide Web emerged in the late 1980s. At first, this was about eye-catching interactive elements such as advertising banners, logos, and menu buttons. Today, in 2015, browsers have developed far enough that no third-party programs are required for interactive graphics to work, which was previously the case. Graphics functions and libraries are now instead built into the browser. The techniques this report covers are Canvas and WebGL, both used to present interactive graphics on the web. WebGL is a graphics library based on the well-known graphics library OpenGL, but built for the web. The graphics are hardware-accelerated just like OpenGL, which means the technique can achieve relatively powerful graphics for a web application. For a trained web developer, WebGL can feel like a more difficult world compared with Canvas, which lies closer to a web developer's area of expertise. Canvas also has greater availability across browsers than WebGL. This work reports how these two techniques compare in drawing speed, together with an image technique called atlas. The atlas technique is, simply put, when one image object acts as an atlas containing several sub-images that could otherwise have been separate image objects. This thesis compares all the cases in an experiment to answer how the drawing-speed performance of Canvas and WebGL compares, with or without the atlas technique.
Over the years, a number of open standards have been developed and implemented in software for addressing a number of challenges, such as lock-in, interoperability, and longevity of software systems and associated digital artefacts. Understanding organisational involvement and collaboration in standardisation is important for informing any future policy and organisational decisions concerning involvement in standardisation. The overarching goal of the study is to establish how organisations contribute to open standards development through editorship. Specifically, the focus is on open standards development in W3C. Through an analysis of editorship for all W3C recommendations, we contribute novel findings concerning organisational involvement and collaboration, and highlight contributions from different types of organisations and from the countries in which the organisations are headquartered. We make three principal contributions. First, we establish an overall characterisation of organisational involvement in W3C standardisation. Second, we report on organisational involvement in W3C standardisation over time. Third, we establish organisational collaboration in W3C standardisation through social network analysis.
It is known that standards implemented in Open Source software (OSS) can promote a competitive market, reduce the risk for lock-in and improve interoperability, whilst there is limited knowledge concerning the relationship between standards and their implementations in OSS. In this paper we report from an ongoing case study conducted in the context of the ORIOS (Open Source software Reference Implementations of Open Standards) project in which influences between OSS communities and software standard communities are investigated. The study focuses on the Drupal project and three of its implemented standards (RDFa, CMIS, and OpenID).
Lean and simulation analysis are driven by the same objective: how to better design and improve processes, making companies more competitive. The adoption of lean has spread widely across companies in both the public and private sectors, and simulation is nowadays becoming more and more popular. Several authors have pointed out the benefits of combining simulation and lean; however, they are still rarely used together in practice. Optimization as an additional technique in this combination is an even more powerful approach, especially when designing and improving complex processes with multiple conflicting objectives. This paper presents the mutual benefits gained when combining lean, simulation, and optimization and how they overcome each other's limitations. A framework including the three concepts, some of the barriers to its implementation, and a real-world industrial example are also described.
Background: Roles in the doctor-patient relationship are changing and patient participation in health care is increasingly emphasized. Electronic health (eHealth) services such as patient accessible electronic health records (PAEHRs) have been implemented to support patient participation. Little is known about practical use of PAEHR and its effect on roles of doctors and patients. Objective: This qualitative study aimed to investigate how physicians view the idea of patient participation, in particular in relation to the PAEHR system. Hereby, the paper aims to contribute to a deeper understanding of physicians' constructions of PAEHR, roles in the doctor-patient relationship, and levels and limits of involvement. Methods: A total of 12 semistructured interviews were conducted with physicians in different fields. Interviews were transcribed, translated, and a theoretically informed thematic analysis was performed. Results: Two important aspects were identified that are related to the doctor-patient relationship: roles and involvement. The physicians viewed their role as being the ones to take on the responsibility, determining treatment options, and to be someone who should be trusted. In relation to the patient's role, lack of skills (technical or regarding medical jargon), motives to read, and patients' characteristics were aspects identified in the interviews. Patients were often referred to as static entities disregarding their potential to develop skills and knowledge over time. Involvement captures aspects that support or hinder patients to take an active role in their care. Conclusions: Literature of at least two decades suggests an overall agreement that the paternalistic approach in health care is inappropriate, and a collaborative process with patients should be adopted. 
Although the physicians in this study stated that they were, in principle, in favor of patient participation, the analysis found little support in their descriptions of daily practice that participation is actually realized. As seen from the results, paternalistic practices are still present, even if professionals might not be aware of this. This can create a conflict between patients who strive to become more informed and their questions being interpreted as signs of critique and mistrust toward the physician. We thus believe that the full potential of PAEHRs has not been reached yet and argue that the concept of patient empowerment is problematic, as it triggers an interpretation of "power" in health care as a zero-sum game, which is not helpful for the maintenance of the relationship between the actors. Patient involvement is often discussed merely in relation to decision making; this study, however, emphasizes the need to also include sensemaking and learning activities. This would provide an alternative understanding of patients asking questions, not in terms of "monitoring the doctor" but of making sense of the situation.
A Network-Centric approach enables systems to be interconnected in a dynamic and flexible architecture to support multilateral, civilian, and military missions. Constantly changing environments require commanders to plan for more flexible missions that allow organizations from various nations and agencies to join or leave the teams performing the missions, depending on the situation. The uncertainty inherent in an actual mission, and the variety of potential organizations that support the mission after it is underway, make Command Intent (CI) a critical concept for the mission team. Both humans and computerized decision support services need the ability to communicate and interpret a shared CI. This paper presents the Operations Intent and Effects Model (OIEM), a model that relates CI to Effects and supports both traditional military planning and Effects-Based Operations. In the provided example, the suggested Command and Control Language is used to express Operations Intent and Effects. © 2009 IEEE.
Swarm AI is a variant of artificial intelligence (AI) used to simulate behaviour resembling what can be seen in swarms, flocks, or schools of fish. The purpose of this work is to investigate whether swarm AI can be used to improve the AI in a top-down shooter game in which two teams fight each other. A program was created to test this, in which both teams consisted of computer-controlled units and fought each other in order to collect measurements. The results show that swarm AI performed better on maze-like levels without large open areas. They also showed that swarm AI consumed considerably more processing resources.
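The flocking behaviour mentioned above is commonly modelled with Craig Reynolds' classic boids rules (cohesion, separation, alignment). The thesis does not publish its implementation, so the following Python sketch is only a hypothetical illustration of one swarm-AI update step; all weights and the neighbour radius are invented for the example:

```python
# One 2-D boids update step (hypothetical illustration of swarm AI).
def step(positions, velocities, neighbour_radius=5.0,
         cohesion=0.01, separation=0.05, alignment=0.05):
    new_velocities = []
    for i, (px, py) in enumerate(positions):
        # neighbours within the (assumed) perception radius
        nbrs = [j for j, (qx, qy) in enumerate(positions)
                if j != i and (qx - px) ** 2 + (qy - py) ** 2 < neighbour_radius ** 2]
        vx, vy = velocities[i]
        if nbrs:
            # cohesion: steer toward the neighbours' centre of mass
            cx = sum(positions[j][0] for j in nbrs) / len(nbrs)
            cy = sum(positions[j][1] for j in nbrs) / len(nbrs)
            vx += cohesion * (cx - px)
            vy += cohesion * (cy - py)
            # separation: steer away from nearby neighbours
            for j in nbrs:
                vx += separation * (px - positions[j][0])
                vy += separation * (py - positions[j][1])
            # alignment: match the neighbours' average velocity
            avx = sum(velocities[j][0] for j in nbrs) / len(nbrs)
            avy = sum(velocities[j][1] for j in nbrs) / len(nbrs)
            vx += alignment * (avx - vx)
            vy += alignment * (avy - vy)
        new_velocities.append((vx, vy))
    positions = [(px + vx, py + vy)
                 for (px, py), (vx, vy) in zip(positions, new_velocities)]
    return positions, new_velocities
```

With the example weights, two units standing close together drift apart, since the separation term dominates cohesion at short range.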
The System Architecture Virtual Integration (SAVI) initiative is a multiyear, multimillion-dollar program that is developing the capability to virtually integrate systems before designs are implemented and tested on hardware. The purpose of SAVI is to develop a means of countering the costs of exponentially increasing complexity in modern aerospace software systems. The program is sponsored by the Aerospace Vehicle Systems Institute, a research center of the Texas Engineering Experiment Station, which is a member of the Texas A&M University System. This report presents an analysis of the economic effects of the SAVI approach on the development of software-reliant systems for aircraft compared to existing development paradigms. The report describes the detailed inputs and results of a return-on-investment (ROI) analysis to determine the net present value of the investment in the SAVI approach. The ROI is based on rework cost avoidance attributed to earlier discovery of requirements errors through analysis of virtually integrated models of the embedded software system expressed in the SAE International Architecture Analysis and Design Language (AADL) standard architecture modeling language. The ROI analysis uses conservative estimates of costs and benefits, especially for those parameters that have a proven, strong correlation to overall system-development cost. The results of the analysis show, in part, that the nominal cost reduction for a system containing 27 million source lines of code would be $2.391 billion (out of an estimated $9.176 billion), a 26.1% cost savings. The original study, reported here, was followed by a study that validated and further refined the estimated cost savings.
This report covers tests of the rendering times of various Canvas objects. The objects are distinct from one another and stress different browsers in order to reveal information about each browser's rendering engine.
Implementing a course management system in an educational institution oriented toward students with learning disabilities such as ADHD represents a big challenge, since these students experience persistent impairments in attention (or concentration) that impact negatively on their learning outcomes, engagement, and motivation. It is crucial to adapt and enhance these environments, taking into consideration the students' special learning needs, in order to improve their user experience and engagement during the learning process. This thesis addresses the design and development of a gamified layer that brings a current analog gamification practice into a course management system environment, Google Classroom (GC). The prototype developed retrieves, transforms, and shows the GC data in the form of game elements such as points, badges, and progress bars, among others. After using the prototype for three weeks, the students showed easy familiarization with the gamified layer of GC and active participation and persistence during their course activities.
NEAT is a neuroevolution technique that can be used to train AI-controlled robots without supplying any human expertise or prior knowledge to the system. This work investigates how well this technique works together with coevolution to evolve robots in a competitive environment, focusing on testing the technique on several different levels with varying amounts of complexity in the form of walls and obstacles.
The technique is evaluated by letting the robots compete against each other; their competence is then measured from the results of these competitions, for example their ability to win matches.
The results show that the technique worked well on the level with low complexity, but that the robots had some difficulty learning competent strategies on the levels with higher complexity. The technique nevertheless lends itself to several variants and improvements that could potentially improve the results on the more complex levels as well.
User participation is seen as an important enabler for successful public e-service development. However, at the same time development of public e-services is still often characterised by an internal government perspective with little consideration for external users’ perspectives. This paper challenges the overly positive attitude that is surrounding user participation in e-government research. The paper aims to illustrate and problematize various aspects that influence why, how, and in whose interest user participation is conducted in public e-service development. First, via a literature review, we identify a set of dimensions for critically exploring how, why, and in whose interest user participation is conducted in public e-service development projects. Second, we use these dimensions in order to characterise and analyse three empirical public e-service development cases in order to test the utility, usefulness, and feasibility of the identified dimensions. Our findings highlight the importance of questioning and elaborating on the motives behind user participation (the why) in public e-service development. We also identify two basic forms of how user participation is addressed in public e-service development projects: 1) veneered participation, and 2) ad-hoc participation. Furthermore, we argue that any decisions made regarding user participation in public e-service development should be based on conscious and informed choices concerning why user participation is needed and what it may bring for different stakeholders and their interests.
This article addresses the question 'what considerations should be taken by cyber commands when designing attack infrastructure for offensive operations?'. Nation-states are investing in equipping units tasked to conduct offensive cyberspace operations. Generating 'deny, degrade, disrupt, destroy or deceive' effects on adversary targets requires moving from one's own ('green'), through neutral ('grey'), to adversary ('red') cyberspace. The movement is supported by attack infrastructure for offensive cyberspace operations. In this paper, we review the professional and scientific literature to identify the requirements for designing an attack infrastructure. Next, we develop and define the concepts for attack infrastructure. Finally, we explain and describe the considerations for designing attack infrastructure. The research question is answered by proposing a framework for designing attack infrastructure. This framework is vital for military and civilian commands designing attack infrastructure for offensive cyberspace operations.
The Executive Order 13800—Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure—issued by the President of the United States, calls for an evaluation of the “readiness and gaps in the United States’ ability to manage and mitigate consequences of a cyber incident against the electricity subsector.” In May of 2018, the Office of Management and Budget finished evaluating the 96 risk assessments conducted by various agencies and published Federal Cybersecurity Risk Determination Report and Action Plan (Risk Report). While the report embraced a broad defending forward strategy, it made no reference to smart grids or their vulnerable intelligent substations and did not address how federal agencies plan to respond to emerging threats to these systems. While the paper does not discuss how to attack the smart grids in the cyber domain, the contribution to the academic debate lies in validating some of the vulnerabilities of the grid’s substations in order for government, private industry, academia, and civil society to better collaborate in disrupting or halting malicious cyber activities before they disrupt the power supply of the United States and its Transatlantic allies. We also discuss how Artificial Intelligence and related techniques can mitigate security risks to cyber-physical systems. Until this technology becomes available, however, standardization of cyber security efforts must be enforced through regulatory means, such as the enforcement of security-by-design Intelligent Electronic Devices and protocols for the smart grid.
In the digitalisation era, overlaying digital, contextualised information on top of the physical world is essential for efficient operation. Mixed reality (MR) is a technology designed for this purpose, and it is considered one of the critical drivers of Industry 4.0. This technology has proven to have multiple benefits in the manufacturing area, including improving flexibility, efficacy, and efficiency. Among the challenges that prevent the large-scale implementation of this technology is the authoring challenge, which we address by answering the following research questions: (1) "how can we speed up MR authoring in a manufacturing context?" and (2) "how can we reduce the deployment time of industrial MR experiences?". This paper presents an experiment performed in collaboration with Volvo within the remanufacturing of truck engines. MR seems to be more valuable for remanufacturing than for many other applications in the manufacturing industry, and the authoring challenge appears to be accentuated. In this experiment, product lifecycle management (PLM) tools are used along with internet of things (IoT) platforms and MR devices. This joint system is designed to keep the information up-to-date and ready to be used when needed. Having all the necessary data cascading from the PLM platform to the MR device using IoT prevents information silos and improves the system's overall reliability. Results from the experiment show how the interconnection of information systems can significantly reduce development and deployment time. Experiment findings include a considerable increase in the complexity of the overall IT system, the need for substantial investment in it, and the necessity of having highly qualified IT staff. The main contribution of this paper is a systematic approach to the design of industrial MR experiences.
The introduction of AJAX has made it possible to create dynamic web applications on the Internet. These dynamic web applications rely on sending data between client and server using asynchronous requests. This is done using a serialization format, which compacts the data and enables communication between different programming languages. Binary serialization formats have consistently been shown to outperform the text-based alternatives on platforms that allow binary transport of data. AJAX is a platform that only allows text-based transport of data. This work aims to compare the performance of text-based and binary serialization formats for AJAX using, among other metrics, round-trip time. The work was carried out by creating a web application that performs performance measurements using data sets of varying size and content. The results show that binary serialization formats would only be a viable alternative for web applications that send highly number-dominant data.
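To illustrate why binary formats only pay off for number-dominant data on a text-only channel such as AJAX, the following Python sketch compares a JSON payload with a Base64-encoded binary packing of the same numbers. This is a hypothetical illustration, not the thesis's measurement code, and the payload is invented:

```python
import base64
import json
import struct

# Hypothetical number-dominant payload.
values = [i / 7 for i in range(1000)]

# Text-based alternative: plain JSON.
text_payload = json.dumps(values)

# "Binary" over a text-only channel: pack as little-endian doubles,
# then Base64-encode, since AJAX can only transport text.
packed = struct.pack(f"<{len(values)}d", *values)
b64_payload = base64.b64encode(packed).decode("ascii")

# Round trip back from the Base64 text.
decoded = list(struct.unpack(f"<{len(values)}d", base64.b64decode(b64_payload)))

print(len(text_payload), len(b64_payload))
```

Because Base64 inflates binary data by roughly a third, the binary route only wins when the textual representation of each value is long, as with the doubles above; for short integers, plain JSON can actually be smaller.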
Researchers have proposed several automated diagnostic systems based on machine learning and data mining techniques to predict heart failure. However, researchers have not paid close attention to predicting cardiac patient mortality. We developed a clinical decision support system for predicting mortality in cardiac patients to address this problem. The dataset collected for the experimental purposes of the proposed model consisted of 55 features with a total of 368 samples. We found that the classes in the dataset were highly imbalanced. To avoid the problem of bias in the machine learning model, we used the synthetic minority oversampling technique (SMOTE). After balancing the classes in the dataset, the newly proposed system employed a χ² statistical model to rank the features from the dataset. The highest-ranked features were fed into an optimized random forest (RF) model for classification. The hyperparameters of the RF classifier were optimized using a grid search algorithm. The performance of the newly proposed model (χ²_RF) was validated using several evaluation measures, including accuracy, sensitivity, specificity, F1 score, and a receiver operating characteristic (ROC) curve. With only 10 features from the dataset, the proposed χ²_RF model achieved the highest accuracy of 94.59%. The proposed χ²_RF model improved the performance of the standard RF model by 5.5%. Moreover, the proposed χ²_RF model was compared with other state-of-the-art machine learning models. The experimental results show that the newly proposed decision support system outperforms the other machine learning systems using the same feature selection module (χ²).
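The feature-ranking step based on the χ² statistic can be sketched in plain Python. This is a minimal illustration of the statistic on discrete features, not the authors' implementation, and the toy data in the usage note are invented:

```python
from collections import Counter

def chi2_score(feature, labels):
    """Chi-square statistic between one discrete feature and the class labels."""
    n = len(labels)
    f_counts = Counter(feature)
    l_counts = Counter(labels)
    observed = Counter(zip(feature, labels))
    score = 0.0
    for fv in f_counts:
        for lv in l_counts:
            # expected cell count under independence of feature and label
            expected = f_counts[fv] * l_counts[lv] / n
            score += (observed.get((fv, lv), 0) - expected) ** 2 / expected
    return score

def rank_features(columns, labels):
    """Return (column index, score) pairs, highest-scoring feature first."""
    scores = [(i, chi2_score(col, labels)) for i, col in enumerate(columns)]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```

For example, a feature identical to the labels outranks one that is independent of them: `rank_features([[0, 0, 1, 1], [0, 1, 0, 1]], [0, 0, 1, 1])` places column 0 first. The top-ranked columns would then be passed to the RF classifier.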
This chapter provides a practical guide on how to use the metadata repository ConceptBase to design information modeling methods by using meta-modeling. After motivating the abstraction principles behind meta-modeling, the language Telos as realized in ConceptBase is presented. First, a standard factual representation of statements at any IRDS abstraction level is defined. Then, the foundation of Telos as a logical theory is elaborated, yielding simple fixpoint semantics. The principles for object naming, instantiation, attribution, and specialization are reflected by roughly 30 logical axioms. After the language axiomatization, user-defined rules, constraints, and queries are introduced. The first part is concluded by a description of active rules that allows the specification of reactions of ConceptBase to external events. The second part applies the language features of the first part to a full-fledged information modeling method: the Yourdon method for Modern Structured Analysis. The notations of the Yourdon method are designed along the IRDS framework. Intra-notational and inter-notational constraints are mapped to queries. The development life cycle is encoded as a software process model closely related to the modeling notations. Finally, aspects of managing the modeling activities are addressed by metric definitions.
The 30th International Conference on Conceptual Modeling (ER-2011) highlighted the strong and persistent interest in research on conceptual modeling for developing information systems. Topics included data modeling theory, goal modeling, socio-technical factors, requirements engineering, process modeling, and ontologies.
The rapid advances in information and communication technology enable a shift from diverse systems empowered mainly by either hardware or software to cyber-physical systems (CPSs) that drive critical infrastructures (CIs), such as energy and manufacturing systems. However, alongside the expected enhancements in efficiency and reliability, the induced connectivity exposes these CIs to cyberattacks, exemplified by the Stuxnet and WannaCry ransomware incidents. Therefore, the need to improve the cybersecurity of CIs through vulnerability assessments cannot be overstated. Yet, CI cybersecurity has intrinsic challenges due to the convergence of information technology (IT) and operational technology (OT) as well as the cross-layer dependencies that are inherent to CPS-based CIs. Different IT and OT security terminologies also lead to ambiguities induced by knowledge gaps in CI cybersecurity. Moreover, current vulnerability-assessment processes in CIs are mostly subjective and human-centered. The imprecise nature of manual vulnerability assessment operations and the massive volume of data cause an unbearable burden for security analysts. The latest advances in machine-learning (ML) based cybersecurity solutions promise to shift such burden onto digital alternatives. Nevertheless, the heterogeneity, diversity, and information gaps in existing vulnerability data repositories hamper the accurate assessments anticipated by these ML-based approaches. Therefore, a comprehensive approach is envisioned in this thesis to unleash the power of ML advances while still involving human operators in assessing cybersecurity vulnerabilities within deployed CI networks. Specifically, this thesis proposes data-driven cybersecurity indicators to bridge vulnerability management gaps induced by ad-hoc and subjective auditing processes as well as to increase the level of automation in vulnerability analysis.
The proposed methodology follows design science research principles to support the development and validation of scientifically sound artifacts. More specifically, the proposed data-driven cybersecurity architecture orchestrates a range of modules that include: (i) a vulnerability data model that captures a variety of publicly accessible cybersecurity-related data sources; (ii) an ensemble-based ML pipeline method that self-adjusts to the best learning models for given cybersecurity tasks; and (iii) a knowledge taxonomy and its instantiated power grid and manufacturing models that capture CI common semantics of cyber-physical functional dependencies across CI networks in critical societal domains. This research contributes data-driven vulnerability analysis approaches that bridge the knowledge gaps among different security functions, such as vulnerability management through related reports analysis. This thesis also correlates vulnerability analysis findings to coordinate mitigation responses in complex CIs. More specifically, the vulnerability data model expands the vulnerability knowledge scope and curates meaningful contexts for vulnerability analysis processes. The proposed ML methods fill information gaps in vulnerability repositories using curated data while further streamlining vulnerability assessment processes. Moreover, the CI security taxonomy provides disciplined and coherent support to specify and group semantically related components and coordination mechanisms in order to harness the notorious complexity of CI networks such as those prevalent in power grids and manufacturing infrastructures. These approaches learn through interactive processes to proactively detect and analyze vulnerabilities while facilitating actionable insights for security actors to make informed decisions.
Current vulnerability scoring mechanisms in complex cyber-physical systems (CPSs) face challenges induced by the proliferation of both component versions and recurring scoring-mechanism versions. Different data-repository sources such as the National Vulnerability Database (NVD), vendor websites, and third-party security tool analysers (e.g., ICS CERT and VulDB) may provide conflicting severity scores. We propose a machine-learning pipeline mechanism to compute vulnerability severity scores automatically. This method also discovers score correlations from established sources to infer and enhance the severity consistency of reported vulnerabilities. To evaluate our approach, we show through a CPS-based case study how our proposed scoring system automatically synthesises accurate scores for some vulnerability instances, to support remediation decision-making processes. In this case study, we also analyse the characteristics of CPS vulnerability instances.
Modern security practices promote quantitative methods to provide prioritisation insights and support predictive analysis, which is supported by open-source cybersecurity databases such as the Common Vulnerabilities and Exposures (CVE) list, the National Vulnerability Database (NVD), CERT, and vendor websites. These public repositories provide a way to standardise and share up-to-date vulnerability information, with the purpose of enhancing cybersecurity awareness. However, data quality issues in these vulnerability repositories may lead to incorrect prioritisation and misallocation of resources. In this paper, we aim to empirically analyse the data quality impact of vulnerability repositories for actual information technology (IT) and operational technology (OT) systems, especially regarding data inconsistency. Our case study shows that data inconsistency may misdirect investment of cybersecurity resources. Instead, correlated vulnerability repositories and trustworthy data verification bring substantial benefits for vulnerability management.
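The kind of inconsistency analysis described, comparing severity scores for the same vulnerability across repositories, can be sketched minimally as follows. The CVE identifiers, scores, and tolerance threshold here are all invented for illustration and are not taken from the paper:

```python
# Hypothetical severity scores reported for the same vulnerabilities
# by different repositories (all values invented).
reported = {
    "CVE-2021-0001": {"NVD": 7.5, "vendor": 9.8, "VulDB": 7.5},
    "CVE-2021-0002": {"NVD": 5.0, "vendor": 5.0, "VulDB": 4.9},
}

TOLERANCE = 1.0  # assumed threshold for flagging disagreement

def inconsistent(scores, tolerance=TOLERANCE):
    """Flag a vulnerability whose cross-repository score spread exceeds the tolerance."""
    return max(scores.values()) - min(scores.values()) > tolerance

flagged = [cve for cve, scores in reported.items() if inconsistent(scores)]
print(flagged)
```

Here only the first entry is flagged, since its scores spread over 2.3 points; prioritising by any single repository's score would hide that disagreement.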
Operating a popular website is a challenging task. Users expect not only services that are always available but also good performance in the form of fast response times. To achieve high availability and avoid performance problems, which can lead to user dissatisfaction and financial losses, the ability to balance web server traffic between servers is an important aspect.
This study aims to evaluate performance aspects of popular open-source load-balancing software working at the HTTP layer. The study includes the well-known load balancers HAProxy and NGINX, but also Traefik and Envoy, which have become popular more recently by offering native integration with container orchestrators. To find performance differences, an experiment was designed with two load scenarios, using Apache JMeter to measure request throughput and response times with a varying number of simulated users.
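The measurement methodology, concurrent simulated users with throughput and response times recorded, can be sketched in Python against a stand-in local server. This is only an illustration of the approach, not the JMeter test plan used in the study; the user count and request count are arbitrary:

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a load-balanced backend (hypothetical, local only).
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"

def one_request(_):
    # Response time for a single GET request.
    t0 = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()
    return time.perf_counter() - t0

users, requests_per_user = 8, 25
t_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=users) as pool:
    latencies = list(pool.map(one_request, range(users * requests_per_user)))
elapsed = time.perf_counter() - t_start

throughput = len(latencies) / elapsed          # requests per second
avg_ms = 1000 * sum(latencies) / len(latencies)  # mean response time
server.shutdown()
print(f"{throughput:.0f} req/s, {avg_ms:.2f} ms avg")
```

In the actual experiment the requests would target each load balancer in front of real backends, and the number of simulated users would be varied per test case.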
The experiment was able to consistently show performance differences between the software in both scenarios. HAProxy had the best overall performance in both scenarios and handled the test cases with 1000 users, at which the other load balancers began generating a large proportion of failed connections, significantly better. NGINX was the slowest when considering all test cases from both scenarios. Averaging the results from both load scenarios, excluding tests at the highest concurrency level of 1000 users, Traefik performed 24% better, Envoy 27% better, and HAProxy 36% better than NGINX.